Note: Most of the data in this FAQ is obsolete, but it may still contain some interesting tidbits.


Eclipse Platform Release Engineering FAQ


Is the Eclipse platform build process completely automated?

Yes. The Eclipse build process starts with a cron job on our main build machine that runs a shell script to check out the builder projects:

  • org.eclipse.releng.eclipsebuilder -> scripts to build each type of drop
  • org.eclipse.releng.basebuilder -> a subset of Eclipse plug-ins required to run the build
  • We also use several small CVS projects on an internal server that store login credentials, publishing information, and JDKs.
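
As a hedged illustration only (the crontab entry, paths, and script name below are hypothetical; the checkouts follow the description above, assuming the public dev.eclipse.org CVS root):

    # hypothetical crontab entry: kick off the nightly build at 20:00
    0 20 * * * /builds/scripts/startBuild.sh >> /builds/logs/build-cron.log 2>&1

    # inside startBuild.sh: check out the builder projects, then hand off to them
    CVSROOT=:pserver:anonymous@dev.eclipse.org:/cvsroot/eclipse
    cvs -d $CVSROOT checkout org.eclipse.releng.eclipsebuilder
    cvs -d $CVSROOT checkout org.eclipse.releng.basebuilder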

Fetch and build scripts are generated automatically by the org.eclipse.pde.build bundle in org.eclipse.releng.basebuilder. Fetch scripts specify the version of code to retrieve from the repository. Build scripts are likewise generated automatically to compile all Java code, determine compile order, and manage dependencies. We create a master feature of all the non-test bundles and features used in the build. This feature is signed and packed, and then we create a p2 repo from the packed master feature. Metadata for the products is published to the repository. We then use the director application to provision the contents of the SDKs and other drops into separate directories, which are then zipped or tarred up. We also use custom build scripts that you can see in org.eclipse.releng.eclipsebuilder.

After the SDKs are built, the automated JUnit and performance testing begins. Tests run over ssh for Linux machines and rsh for Windows machines. Each component team contributes their own tests. Once the tests are completed, the results are copied back to the build machine and the images for the performance tests are generated.

What is the latest version of org.eclipse.releng.basebuilder?

See this document, which describes the correct tag of org.eclipse.releng.basebuilder to use in your builds.

I would like to recompile eclipse on a platform that is not currently supported. What is the best approach to this? How can I ensure that the community can use my work?

The best approach is to use the source build drop and modify the scripts to support the platform you are interested in. Then open a bug with product Platform and component Releng, attaching the patches to the scripts that were required to build the new drop. If you are interested in contributing a port to Eclipse, here is the procedure.


How does the platform team sign their builds?

See Platform-releng-signedbuild

How long does the build take to complete?

It takes three hours and 20 minutes for all the drops to be produced. The JUnit tests (eight hours) and performance tests (twelve hours) run in parallel after the Windows, Mac, and Linux SDK and test drops have been built. It takes another two hours for the performance results to be generated.

When is the next build?

Please refer to the build schedule.

When is the next milestone?

Please refer to the Eclipse Platform Project Plan.

I noticed that you promoted an integration build to a milestone build. The integration build had test failures, but the milestone build doesn't show any. Why is this?

See bug 134413.

We have a number of tests that intermittently fail. The reasons are:

  • issues with the tests themselves
  • tests subject to timing issues
  • tests that fail intermittently due to various conditions

The component teams are always trying to fix their tests, but unfortunately there are still some issues. When we promote a build to a milestone, we rerun the tests that failed. Many pass the second time because they initially failed due to a timing issue or intermittent condition. Or a team will have a broken test that doesn't warrant a rebuild for a milestone. In that case, the releng team sprinkles pixie dust over the build page to erase the red Xs, but leaves the appropriate build failures intact.

Scripts to promote a 4.x stream build

ssh to build.eclipse.org as pwebster

./promote4x.sh ${buildId} ${milestonename}

Example: ./promote4x.sh I20110916-1615 M2

Update the index.html files to reflect the new build:

/home/data/httpd/download.eclipse.org/eclipse/downloads/index.html
/home/data/httpd/download.eclipse.org/e4/downloads/index.html

How do I run the JUnit tests used in the build?

With every build, there is an eclipse-Automated-Tests-${buildId}.zip file. You can follow the instructions associated with this zip to run the JUnit tests on the platform of your choice.
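
A minimal sketch of the usual flow, assuming the runtests script name and flags described in the instructions shipped inside the zip:

    # unpack the test harness next to the SDK under test
    unzip eclipse-Automated-Tests-${buildId}.zip
    cd eclipse-testing
    # run the JUnit tests for your platform (see the zip's readme for exact options)
    sh runtests -os linux -ws gtk -arch x86_64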

How do you run the tests on the Windows machine via rsh?

To run the Windows tests from the build machine:

rsh ejwin2 start /min /wait c:\\buildtest\\N20081120-2000\\eclipse-testing\\testAll.bat c:\\buildtest\\N20081120-2000\\eclipse-testing winxplog.txt

How do I set up performance tests?

Refer to the performance tests how-to, or the even better Performance First talk from EclipseCon 2007.

Baseline tests are run from a branch of the builder.

  • 3.8 builds are compared against 3.4 baselines in the perf_37x branch of org.eclipse.releng
  • 3.7 builds are compared against 3.4 baselines in the perf_36x branch of org.eclipse.releng
  • 3.6 builds are compared against 3.4 baselines in the perf_35x branch of org.eclipse.releng
  • 3.5 builds are compared against 3.4 baselines in the perf_34x branch of org.eclipse.releng
  • 3.4 builds are compared against 3.3 baselines in the perf_33x branch of org.eclipse.releng
  • 3.3 builds are compared against 3.2 baselines in the perf_32x branch of org.eclipse.releng
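
For example, to fetch the branch holding the baselines used for 3.7 builds (a sketch, assuming the public dev.eclipse.org CVS root):

    cvs -d :pserver:anonymous@dev.eclipse.org:/cvsroot/eclipse \
        checkout -r perf_36x org.eclipse.releng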

How do I find data for old performance results on an existing build page?

Refer to the "Raw data and Stats" link for a specific test.

For instance, for 3.3.1.1 results, click here.

Then look at the jdt ui tests.

Then click on a specific test.

Then click "Raw data and Stats".

You will see the data for previous builds.

How do I run the performance tests from the zips provided on the download page?

See here.

Process to implement a new baseline

Implement new performance baseline after a release.

How to regenerate performance results from build artifacts

This assumes that the artifacts used to run the build and the resulting build directory itself still exist on the build machine.

  • cd to the build directory on the build machine
  • cd org.eclipse.releng.eclipsebuilder
  • apply the patch to the builder in bug [256297]
  • rerun the generation on the patched builder
  • on the build machine:
    • at now
    • cd /builds/${buildId}/org.eclipse.releng.eclipsebuilder; sh command.txt
    • press Ctrl+D to end the at input


The process needs to be executed using at or cron if you are logged in remotely, because the SWT libraries are needed to run the generation. If you run the process while logged in remotely via ssh, it will fail with "No more handles". The output logs for this process are in the posting directory under buildlogs/perfgen*.txt.

The user that's running the tests on the build machine needs to run "xhost +" in a terminal on the build machine.
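
Putting the steps above together, a minimal sketch of the remote invocation (paths as in the list above; substitute the build in question for ${buildId}):

    # allow the X display to be used by the generation process
    xhost +
    # schedule the regeneration through at so it runs outside the ssh session
    at now <<EOF
    cd /builds/${buildId}/org.eclipse.releng.eclipsebuilder
    sh command.txt
    EOF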


How performance tests are invoked in the build

Performance tests run as long as neither the -skipTest nor the -skipPerf parameter is passed to the build when running it. Both JUnit and performance tests are invoked by the testAll target in org.eclipse.releng.eclipsebuilder/buildAll.xml:

<target name="testAll" unless="skip.tests">
		<waitfor maxwait="4" maxwaitunit="hour" checkevery="1" checkeveryunit="minute">
			<and>
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-Automated-Tests-${buildId}.zip.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-SDK-${buildId}-win32.zip.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-SDK-${buildId}-linux-gtk.tar.gz.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-SDK-${buildId}-macosx-cocoa.tar.gz.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-${buildId}-delta-pack.zip.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-platform-${buildId}-win32.zip.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-platform-${buildId}-linux-gtk.tar.gz.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-platform-${buildId}-macosx-cocoa.tar.gz.md5" />
			</and>
		</waitfor>

		<property name="cvstest.properties" value="${base.builder}/../eclipseInternalBuildTools/cvstest.properties" />
		<antcall target="configure.team.cvs.test" />

		<!--replace buildid in vm.properties for JVM location settings-->
		<replace dir="${eclipse.build.configs}/sdk.tests/testConfigs" token="@buildid@" value="${buildId}" includes="**/vm.properties" />

		<antcall target="addnoperfmarker" />

		<parallel>
			<antcall target="test-JUnit" />
			<antcall target="test-performance" />
		</parallel>
	</target>

The test-performance target in buildAll.xml looks like this:

	<target name="test-performance" unless="skip.performance.tests">

		<echo message="Starting performance tests." />
		<property name="dropLocation" value="${postingDirectory}" />
		<ant antfile="testAll.xml" dir="${eclipse.build.configs}/sdk.tests/testConfigs" target="performanceTests" />
		<antcall target="generatePerformanceResults" />
	</target>


This calls the testAll.xml in org.eclipse.releng.eclipsebuilder/sdk.tests/testConfigs:

<target name="performanceTests">

		<condition property="internalPlugins" value="../../../eclipsePerformanceBuildTools/plugins">
			<isset property="performance.base" />
		</condition>

		<property name="testResults" value="${postingDirectory}/${buildLabel}/performance" />
		<mkdir dir="${testResults}" />

		<parallel>
			<antcall target="test">
				<param name="tester" value="${basedir}/win32xp-perf" />
				<param name="cfgKey" value="win32xp-perf" />
				<param name="markerName" value="eclipse-win32xp-perf-${buildId}" />
			</antcall>
			<antcall target="test">
				<param name="tester" value="${basedir}/win32xp2-perf" />
				<param name="cfgKey" value="win32xp2-perf" />
				<param name="markerName" value="eclipse-win32xp2-perf-${buildId}" />
			</antcall>
			<antcall target="test">
				<param name="tester" value="${basedir}/rhelws5-perf" />
				<param name="sleep" value="120" />
				<param name="cfgKey" value="rhelws5-perf" />
				<param name="markerName" value="eclipse-rhelws5-perf-${buildId}" />
			</antcall>
			<antcall target="test">
				<param name="tester" value="${basedir}/sled10-perf" />
				<param name="sleep" value="300" />
				<param name="cfgKey" value="sled10-perf" />
				<param name="markerName" value="eclipse-sled10-perf-${buildId}" />
			</antcall>
		</parallel>
	</target>

This invokes the tests in parallel on the performance test machines. A machine.cfg file in the same directory as the above file maps the "cfgKey" values above to the hostnames of the machines. The tests are invoked on the Windows machines via rsh and on the Linux machines via ssh.

#Windows XP
win32xp-perf=epwin2
win32xp2-perf=epwin3

#RedHat Enterprise Linux WS 5
rhelws5-perf=eplnx2

#sled 10 
sled10-perf=eplnx1

This invokes all the tests in org.eclipse.releng.eclipsebuilder\eclipse\buildConfigs\sdk.tests\testScripts\test.xml on each machine. If a test bundle has a performance target in its test.xml, the performance tests for that bundle will run on that machine. The test scripts use the values in (for example, when running on Windows XP) org.eclipse.releng.eclipsebuilder\eclipse\buildConfigs\sdk.tests\testConfigs\win32xp2-perf\vm.properties, which specifies the database to write to as well as the port and URL of that database.
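
For illustration, a config's vm.properties might carry values like the following. The key names and paths here are assumptions, not copied from the real file; the database location matches the eclipse.perf.dbloc system property used by the result generation below:

    # hypothetical vm.properties sketch
    # JVM to run the tests with (the @buildid@ token is substituted at build time)
    jvm=/shared/common/jdk1.6.0/jre/bin/java
    # performance database: Derby server host and port (hostnames as in machine.cfg)
    eclipse.perf.dbloc=net://eplnx1:1528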

When the performance tests complete, the results are generated:

<target name="generatePerformanceResults">
		<mkdir dir="${buildDirectory}/${buildLabel}/performance" />
		<mkdir dir="${postingDirectory}/${buildLabel}/performance" />
		<taskdef name="performanceResults" classname="org.eclipse.releng.performance.PerformanceResultHtmlGenerator" />
		<condition property="configArgs" value="-ws gtk -arch ppc">
			<equals arg1="${os.arch}" arg2="ppc64" />
		</condition>
		<condition property="configArgs" value="-ws gtk -arch x86">
			<equals arg1="${os.arch}" arg2="i386" />
		</condition>
		<property name="configArgs" value="" />

		<java jar="${basedir}/../org.eclipse.releng.basebuilder/plugins/org.eclipse.equinox.launcher.jar" fork="true" maxmemory="512m" error="${buildlogs}/perfgenerror.txt" output="${buildlogs}/perfgenoutput.txt">
			<arg line="${configArgs} -consolelog -nosplash -data ${buildDirectory}/perf-workspace -application org.eclipse.test.performance.ui.resultGenerator
						-current ${buildId}
						-jvm ${eclipse.perf.jvm}
						-print					    
						-output ${postingDirectory}/${buildLabel}/performance/
						-config eplnx1,eplnx2,epwin2,epwin3
			            -dataDir ${postingDirectory}/../../data/v38
						-config.properties ${eclipse.perf.config.descriptors}
						-scenario.pattern org.eclipse.%.test%" />
			<!-- baselines arguments are no longer necessary since bug https://bugs.eclipse.org/bugs/show_bug.cgi?id=209322 has been fixed...
						-baseline ${eclipse.perf.ref}
						-baseline.prefix R-3.4_200806172000
			-->
			<!-- add this argument to list above when there are milestone builds to highlight 
			-highlight.latest 3.3M1_
			-->
			<env key="LD_LIBRARY_PATH" value="${basedir}/../org.eclipse.releng.basebuilder" />
			<sysproperty key="eclipse.perf.dbloc" value="${dbloc}" />
		</java>
	</target>

Three important things about generating performance results:

  1. "xhost +" needs to be enabled in a terminal by the user generating the performance results
  2. The Derby database needs to be running on port 1528 (there is an init script for that)
  3. X has to be open on the machine generating the results, or you'll get the "SWT - no more handles" error
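
A sketch of preparing a machine for result generation, covering items 1-3 above and assuming Derby's standard derbyrun.jar launcher is available:

    # 1. allow X access for the user generating the results
    xhost +
    # 2. start the Derby network server on port 1528
    java -jar /path/to/derbyrun.jar server start -p 1528 &
    # 3. point the generation process at the machine's open X display
    export DISPLAY=:0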

Why should I package plugins and features as jars?

See the Running Eclipse from JARs document from the Core team.

Debugging PDE build with missing dependencies

Set a breakpoint in the BuildTimeSite class, in the missingPlugins method. Then, in the Display view, evaluate

    state.getState().getResolverErrors(state.getBundle("org.eclipse.ui.workbench", null, false))

to print the resolver errors for the bundle in question.

If I add a new plug-in to a build, how do I ensure that javadoc will be included in the build?

See the adding javadoc document.


Troubleshooting test failures

If tests pass in a dev workspace but fail in the automated test harness, check that your build.properties is exporting all the necessary items; check that plug-in dependencies are correct, since the test harness environment will only include depended-upon plug-ins; and check that file references do not depend on the current working directory, since that may be different in the test harness.
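
For example, a test plug-in's build.properties should export everything the harness needs at runtime (a sketch; the entries below are illustrative, adjust them to your plug-in):

    bin.includes = META-INF/,\
                   .,\
                   test.xml,\
                   icons/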

To debug tests in the context of the automated test harness, add the following element to the test.xml file in your plug-in's directory in the test harness.

<property
        name="extraVMargs" 
        value="-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Djava.compiler=NONE"/>

Then,

  • create a remote debugging launch target in your dev environment
  • put a breakpoint somewhere in your code
  • launch the tests from the command line, using the -noclean option to preserve the modified test.xml
  • launch the debug target
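
Putting it together, a hedged sketch of the launch step (the runtests script name is assumed from the test zip's instructions; the debug VM args come from the test.xml edit above):

    # run the tests, keeping the modified test.xml in place
    cd eclipse-testing
    sh runtests -os linux -ws gtk -arch x86_64 -noclean
    # then attach the remote debugging launch target to port 8000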

Troubleshooting tests that crash or time out, aka "(-1)DNF"

The table "Unit Test Results" on testResults.php sometimes shows "(-1)DNF" instead of (0) or the number of failing tests. This means the tests Did Not Finish, i.e., for some reason, no <test-suite-name>.xml file was produced.

Note that the absence of a DNF entry does not always mean that everything is all right! E.g., in bug 474161, one of the two SWT test suites was killed by a timeout, but since the other one passed, the testResults table currently doesn't show that (which needs to be fixed, see bug 210792).

To get more information about crashes and timeouts, consult the "Console Output Logs" on logs.php. In the main logs (e.g. linux.gtk.x86_64_8.0_consolelog.txt), look for entries like this:

    [java] EclipseTestRunner almost reached timeout '7200000'.

and

    [java] Timeout: killed the sub-process

    collect-results:
    [junitreport] the file /Users/hudsonBuild/workspace/ep46I-unit-mac64/workarea/I20150805-2000/eclipse-testing/test-eclipse/Eclipse.app/Contents/Eclipse/org.eclipse.equinox.p2.tests.AutomatedTests.xml is empty.
    [junitreport] This can be caused by the test JVM exiting unexpectedly

Search for the keywords almost, killed, and JVM exiting unexpectedly to quickly find the relevant region in the console log.
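
For example, on a downloaded console log (file name taken from the example above):

    grep -nE "almost|killed|JVM exiting unexpectedly" linux.gtk.x86_64_8.0_consolelog.txt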

Before the Ant task that drives the automated tests kills the test process, the EclipseTestRunner tries to produce a thread dump and take a screenshot (twice within 5 seconds). The stack traces end up in the *_consolelog.txt, and the screenshots are made available on the logs.php page, e.g.:

Screen captures for tests timing out on linux.gtk.x86_64_8.0
   timeoutScreens_org.eclipse.swt.tests.junit.AllBrowserTests_screen0.png

Also consult the "Individual * test logs" on the logs.php page (one .txt file per test suite). Stdout output goes into those files. Stderr output goes into the *_consolelog.txt.

Since Oxygen (4.7), the org.eclipse.test.performance bundle contains a class TracingSuite that can be used instead of the normal JUnit 4 Suite runner. Just define a test suite with @RunWith(TracingSuite.class), and you will get a message on System.out before each atomic test starts. The Javadoc of the class has all the details.

Eclipse Release checklist

Where are the p2 update sites for the Eclipse Project?

See the list

How do I use the p2 zipped repos on the build page to provision my install or a pde target?

http://wiki.eclipse.org/Equinox/p2/Equinox_p2_zipped_repos

How to avoid breaking the build

Avoid breaking the build


How do you incorporate the p2.mirrorsURL into your repo at build time?

See this excellent document on p2.mirrorsURL written by Stephan Herrmann.

The p2.mirrorsURL property should be added to your metadata so that p2 will see the list of available mirrors to choose from during installation. Mirrors = less eclipse.org bandwidth utilization = happy Eclipse.org webmasters. Always try to keep the sysadmins happy is my motto.
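
A quick way to check whether a repository you produced actually advertises the property (a sketch, assuming the metadata is packed as content.jar):

    unzip -p content.jar content.xml | grep p2.mirrorsURL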

Useful reference documents
