
Jenkins - Using the results of one job, in another job

I have a job that runs a makefile and generates some files. I then want another job that publishes these files to Ivy.

I am aware of the clone workspace plugin, but is there any other options?


You run a Makefile, and you're publishing to Ivy?

Ivy is a dependency manager maintained as a subproject of Apache Ant; it takes advantage of the worldwide Maven repository infrastructure to fetch required jar files and other dependencies.

Don't get me wrong: I've used an in-house Maven repository to publish C/C++ libraries (consumers can fetch the items with wget) that other projects depend upon. But I didn't do that using Ivy.

If you're thinking of Apache Ivy, you can publish using Maven, since Ivy can read Maven repositories. The Maven Release plugin will copy your artifact to your repository as part of a release, but what you probably want is a plain deploy.

In my Jenkins builds, I simply had Jenkins execute Maven's deploy:deploy-file goal from the command line. This let me deploy files into my Maven repository without first creating a pom.xml file. (Well, you'll want a pom.xml anyway, because you want the artifact to carry its dependency information.)
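As a sketch, such an "Execute shell" build step might look like the following. Every coordinate, URL, and repository ID here is a placeholder, not a value from any real build; the command is stored in a variable and echoed so you can see the full invocation, whereas in the actual job you would just run it directly:

```shell
#!/bin/sh
# Sketch of a Jenkins "Execute shell" build step. The coordinates,
# URL, and repository ID below are all placeholders.
DEPLOY_CMD="mvn deploy:deploy-file \
  -Dfile=build/mylib.jar \
  -DgroupId=com.example \
  -DartifactId=mylib \
  -Dversion=1.0.0 \
  -Dpackaging=jar \
  -DgeneratePom=true \
  -Durl=http://repo.example.com/releases \
  -DrepositoryId=internal-releases"

# In the real job you would execute $DEPLOY_CMD directly; echoing it
# here just shows the full invocation.
echo "$DEPLOY_CMD"
```

With -DgeneratePom=true Maven generates a minimal pom for you; once you have a real pom.xml with a dependency section, drop that flag and pass -DpomFile instead.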

I usually did this in the same job as the job that created my jar/war/ear file. However, if you want a separate job to do this, you can use the Copy Artifact Plugin. This plugin allows Job B to copy any or all of the published artifacts from Job A. That's a lot faster and simpler than cloning a whole workspace if you just want the built jar files.


My personal preference is to do this sort of thing without relying on Jenkins' internal file structure, though sometimes this means knowing about the internal structure of your other build tools (e.g., Maven, or in your case Ivy).

If I were you, I'd do everything in one job - i.e., build, and then have an "Ivy publisher" step (if such a plugin exists) publish the artifact to the remote Ivy repository.

If that's not possible, have the first job "install" the artifact into the local repository/cache (I'm not sure what it's called on Ivy), and then have the second job pick it up from there.

I'm not sure this is necessarily the best approach, but it has worked well for me.

Edit: I should mention that this doesn't work so well in distributed environments, unless, like me, your distributed environment consists of multiple nodes that share access to a common NAS filesystem.

Edit 2: I have also used the Copy To Slave Plugin for distributed environments without a common filesystem.


You have several options. One is Clone Workspace, which works fairly well but doubles the disk space needed (which in our case is quite relevant). Most other approaches are variations on Clone Workspace.

What I have done instead is use custom workspace locations: my first job builds everything, then triggers a second job whose custom workspace is set to the first job's workspace, so the second job performs its tasks on the same files. You do have to enable the option that blocks the first job from building while the second is running, since both operate on the same files, which is kind of a fine line.

However, if you need it and are careful, this can be a viable solution.


Use the Copy Artifact plugin to copy artifacts from Job A (compile) to Job B (publish).


I'd go with a master build file that handles both subtasks. Ant has a set of execution tasks you can use to run another Ant build file, execute command-line commands, and so on. Look here:

http://ant.apache.org/manual/tasksoverview.html

Perhaps you could kick off the make with an exec task and handle the Ivy publish by running an Ant build with the ant task.
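As a rough sketch (not tested against a real Ivy setup), a master build.xml along those lines might look like this. The resolver name "shared", the revision, and the artifact pattern are all assumptions, and it presumes the Ivy Ant tasks and an ivysettings.xml are already configured:

```xml
<project name="master" default="publish"
         xmlns:ivy="antlib:org.apache.ivy.ant">

  <!-- Step 1: kick off the existing Makefile via exec. -->
  <target name="make">
    <exec executable="make" failonerror="true"/>
  </target>

  <!-- Step 2: publish the generated files to Ivy.
       "shared" is a placeholder resolver name from ivysettings.xml. -->
  <target name="publish" depends="make">
    <ivy:resolve/>
    <ivy:publish resolver="shared" pubrevision="1.0.0" overwrite="true">
      <artifacts pattern="build/[artifact].[ext]"/>
    </ivy:publish>
  </target>
</project>
```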
