What to divide into different jobs when files created in job A are needed in job B? (Example: building a .NET application using MSBuild)
I'm thinking about the best way to structure jobs in Hudson and what to divide into separate jobs. I'll use a .NET application as the example, since that is what I'm working on now, but I think many of the ideas are generic.
These are the steps I want to perform, without yet dividing them into jobs but keeping the dependencies in mind. (Notation: <- means "depends on", and [X] = aaaaa means that aaaaa is a description of task [X].)
- [C] = Check out the project, using Mercurial in this case.
- [C] <- [S] = Run StyleCop on the source files to make sure they comply with our coding standard.
- [C] <- [D] = Create documentation from our project using DoxyGen or Sandcastle.
- [C] <- [O] = Run the code tasks plugin to get a nice presentation of our TODO etc comments.
- [C] <- [B] = Build the solution using MSBuild with the Release target. The result in this case will be library files compiled to DLL assembly files. We would like to archive these artifacts.
- [B] <- [T] = Run NUnit tests on the library files.
- [B] <- [F] = Use FxCop to get some nice static code analysis from the library files.
- [B] <- [W] = Use the compiler warnings plugin on the build log to extract all warnings given during the compilation.
- [D], [B] <- [R] = Release, create a release archive and upload it to a server.
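To make the ordering concrete, the dependency notation above can be modeled as a small graph and topologically sorted to get a valid execution order. This is only a sketch of the dependencies listed above, nothing Hudson-specific:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Map each task letter from the list above to the tasks it depends on.
deps = {
    "C": set(),        # checkout (Mercurial)
    "S": {"C"},        # StyleCop
    "D": {"C"},        # documentation (Doxygen/Sandcastle)
    "O": {"C"},        # code tasks (TODO) report
    "B": {"C"},        # MSBuild Release build
    "T": {"B"},        # NUnit tests
    "F": {"B"},        # FxCop static analysis
    "W": {"B"},        # compiler warnings from the build log
    "R": {"D", "B"},   # release archive
}

# static_order() yields one valid execution order; tasks whose
# dependencies are already satisfied could also run in parallel.
order = list(TopologicalSorter(deps).static_order())
print(order)  # "C" comes first, "R" comes after both "D" and "B"
```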
If I split all of these up into different jobs:
- How should I get the source code checked out in step [C] into steps [S], [D], [O], and [B], which all need it?
- How should I get the MSBuild log file generated in step [B] into step [W]?
- How do I get the DLL artifacts generated in step [B] into steps [T] and [F], which both need them?
My main problem, if I split all the steps into different projects, is how to pass these files between the projects in a clean manner. I could of course resort to hard-coding file paths, but that seems inflexible (though I might be wrong).
On the other hand, if I do split them into different projects, each project is less complex than a single project crammed with all these steps would be; that many things in one project might be hard to maintain. Splitting would also let me run independent projects in parallel, which I guess would speed up the whole process.
I have a different understanding of the 'job'. In my case I'm using Hudson to build several projects, and for some projects I have more than one job, but not for the reasons you describe above.
I use a build tool like Ant or Maven for the very specific steps of my build, such as your [O] or [D] tasks. For the more generic steps, like running unit tests or deploying artifacts, I use Hudson plugins that handle those processes.
I think you will find that many of these plugins are cross-language.
While Hudson is an amazing and powerful tool for continuous integration, I can say that the hard work is done by Maven and its plugins. Code coverage reports, FindBugs reports, project site generation, Javadoc generation, and bytecode instrumentation are a few of the tasks I rely on Maven for.
So I use different jobs when I want a different final objective for each build, not to make a chain of steps leading to a final artifact set.
For example, I have one job that builds my app hourly and sends email reports if there are any errors, and a second job for the same project that generates a release. The latter is triggered manually, and I use it to generate all the docs, reports, and artifacts I have to assemble in order to produce a stable release of my project.
I hope my view of how to use Hudson helps.
You list quite a few tasks for your job. It usually does not make sense to have one job per task; it makes more sense to group them. In my experience, for instance, a separate job just for checkout buys you nothing.
Remember, more jobs make a build process more brittle and harder to maintain. So first set your goals and strategy, and only then divide the build process into individual jobs.
The philosophy I pursue is frequent check-ins to the repository, with the rule that no check-in should break the build. This means the developer needs fast feedback after a check-in, so I would have one job run C, B, T, W, and then S, in this order. If you prefer, you can also run O and F in this job. What does this order buy you? You get fast feedback on the most important question: does the code compile? The second most important question is whether the unit tests do what they are supposed to do. Then you test the less important items (compiler warnings and coding standards), and after that you can run your statistics. Personally, I would run O (TODOs) and F (code analysis) in the nightly build, which runs a whole release, but you can also run the whole release on every check-in.
I would only separate the build/release process into smaller steps if the artifacts are needed faster. For me it is usually acceptable for a job to run up to 15 minutes. Why? Because I still get fast feedback when it breaks (often in less than 2 minutes), since the job stops there and does not run the remaining (now useless) tasks. Sometimes I run jobs in parallel. For parallel execution, and when splitting a job, I initially used standard dependencies ("Build other projects") to trigger dependent projects, but now I mostly use the Parameterized Trigger plugin. I increasingly also use the Join plugin to run steps in parallel and continue only once both branches have completed.
To pass files between two jobs I used to use an external repository (just a shared directory on Windows) and pass the path to the files as a parameter to the next job. I have since switched to Hudson's archive-artifact function: I pass the job-run URL to the next job, which downloads the files over HTTP. This removes the technical problems of mounting Windows shares on Unix (even though CIFS does a pretty good job). In addition, you can use the Clone Workspace SCM plugin, which helps if you need the whole workspace in other jobs.
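As a concrete illustration of the archive-and-download approach, the snippet below builds the HTTP URL under which Hudson (and Jenkins) expose archived artifacts. The server, job name, and artifact path here are invented for the example; only the /job/&lt;name&gt;/&lt;build&gt;/artifact/&lt;path&gt; URL layout is assumed:

```python
from urllib.parse import quote

def artifact_url(hudson_base, job, build, relative_path):
    """Build the download URL for an artifact archived by a Hudson job.

    `build` can be a build number or a symbolic name such as
    "lastSuccessfulBuild"; `relative_path` is the artifact's path
    relative to the workspace, as shown on the job's artifact page.
    """
    return "%s/job/%s/%s/artifact/%s" % (
        hudson_base.rstrip("/"),
        quote(job),
        build,
        quote(relative_path),  # keeps "/" separators intact by default
    )

# The upstream job passes its run URL (or build number) as a parameter;
# the downstream job then fetches the DLLs over HTTP instead of reading
# a shared directory. Names below are hypothetical.
url = artifact_url("http://hudson.example.com", "MyApp-Build",
                   "lastSuccessfulBuild", "bin/Release/MyLib.dll")
print(url)
# A downstream job could then download it with, e.g.,
# urllib.request.urlretrieve(url, "MyLib.dll") or wget.
```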