
How can I publish static web resources to Amazon S3 using Hudson/Jenkins and Maven?

I'd like to be able to deploy static web resources (JPGs, CSS, that sort of thing) to Amazon S3, as they won't be served by the same server as my main webapp.

I use Jenkins (formerly Hudson) and Maven to build a Java webapp .WAR file and then upload it to a Tomcat instance using the Jenkins "Deploy to container" plugin.

I really want the static assets to be deployed as part of the main build process, but I have no idea of the best way to get them to S3. I've seen Hudson/Jenkins plugins that copy artifacts, but that would only copy my .WAR file, not the static files inside the project.

Any ideas on a 'nice' way to do this? Should I be doing this with a Maven plugin instead of a Hudson/Jenkins one?


This is how I do it: use an external program, such as s3cmd, to do the job. Simply specify a shell-script build step like this:

#!/bin/sh

s3cmd sync --guess-mime-type -P $WORKSPACE/src/main/resources s3://your-bucket-name/some/path

You can probably integrate this into your pom.xml and call it from there, so that this part of your deployment process is under version control.
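One way to wire that into the pom.xml is to invoke s3cmd through the exec-maven-plugin. This is only a sketch: the bucket path, local directory and lifecycle phase are placeholders you would adapt.

```xml
<!-- Sketch: run s3cmd from the Maven build via exec-maven-plugin.
     The bucket name, local path and lifecycle phase are placeholders. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>3.1.0</version>
  <executions>
    <execution>
      <id>sync-static-assets</id>
      <phase>deploy</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>s3cmd</executable>
        <arguments>
          <argument>sync</argument>
          <argument>--guess-mime-type</argument>
          <argument>-P</argument>
          <argument>${project.basedir}/src/main/resources</argument>
          <argument>s3://your-bucket-name/some/path</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```

This keeps the sync step in version control alongside the rest of the build, at the cost of requiring s3cmd (and its credentials) on every build machine.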


It turned out I didn't need to do this. We were always planning to use CloudFront for distribution, and AWS has recently allowed you to specify a 'custom origin' for CloudFront distributions. This means the static assets can be deployed along with the rest of the .war contents, and a CloudFront distribution can then be pointed at that application.


I created an s3-webcache-maven-plugin that uploads images, JavaScript, CSS and any other static resources from src/main/webapp to a given S3 bucket; the sources are available at https://github.com/aro1976/aws-parent.

In addition, it creates a manifest called WEB-INF/s3-webcache.xml that can be used by a servlet filter to redirect requests from your web server to S3 or CloudFront.
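Registering such a filter in web.xml might look like the following. This is hypothetical: the fully qualified filter class name is a guess, so take the real one from the example project's web.xml.

```xml
<!-- Hypothetical filter registration; the filter class name is a guess
     and should be taken from the example project's web.xml. -->
<filter>
  <filter-name>WebCacheFilter</filter-name>
  <filter-class>br.com.dynamicflow.aws.WebCacheFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>WebCacheFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```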

You need to place the following configuration into <build><plugins>:

<plugin>
  <groupId>br.com.dynamicflow.aws</groupId>
  <artifactId>s3-webcache-maven-plugin</artifactId>
  <version>0.0.2-SNAPSHOT</version>
  <configuration>
    <accessKey>${s3.accessKey}</accessKey>
    <secretKey>${s3.secretKey}</secretKey>
    <bucketName>${s3.bucketName}</bucketName>
    <hostName>${cloudForge.cname}</hostName><!-- hostName is optional -->
    <includes>
      <include>**/*.gif</include>
      <include>**/*.jpg</include>
      <include>**/*.tif</include>
      <include>**/*.png</include>
      <include>**/*.pdf</include>
      <include>**/*.swf</include>
      <include>**/*.eps</include>
      <include>**/*.js</include>
      <include>**/*.css</include>
    </includes>
    <excludes>
      <exclude>WEB-INF/**</exclude>
    </excludes>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>upload</goal>
      </goals>
      <phase>prepare-package</phase>
    </execution>
  </executions>
</plugin>

The includes and excludes configuration parameters are currently required; they accept standard Maven wildcard patterns.

The files stored on S3 are named after their SHA-256 digest, which allows very long cache headers and multi-war optimization. That is why I created the WebCacheFilter, which is very simple and translates the traditional file names into their SHA-256 digest counterparts.
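The digest-based naming can be sketched like this. It is a minimal illustration of the idea, not the plugin's actual code:

```java
import java.security.MessageDigest;

public class DigestName {
    /** Hex-encode the SHA-256 digest of the given bytes. */
    static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // A stylesheet's stored name becomes its content digest plus extension,
        // so its URL changes whenever its content changes. That makes it safe
        // to serve with a far-future Cache-Control header.
        String stored = sha256Hex("body { color: red; }".getBytes("UTF-8")) + ".css";
        System.out.println(stored);
    }
}
```

Because the name changes with the content, clients can cache the asset essentially forever; a redeploy with new content produces a new name, and the filter maps the stable logical name to the current digest name.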

Check the example project at https://github.com/aro1976/aws-parent/tree/aws-parent-0.0.1/aws-examples/s3-webcache-example, especially the files pom.xml (with the plugin configuration) and web.xml (with the filter configuration).


I'd suggest the AWS CLI. You can easily install it via pip on most platforms.

Pushing assets to S3 (and therefore to any CloudFront distribution backed by the bucket) is as simple as syncing a local directory:

aws s3 sync your-local-dir/ s3://your-bucket --acl "public-read"

The --acl public-read flag makes the assets world-readable.

Rather than going through plugins, you should be able to just add the above as a shell build step in your Jenkins configuration.
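As a Jenkins "Execute shell" build step this might look like the following sketch; the bucket name, source path and Cache-Control value are placeholders, and the long max-age is only safe if your file names are versioned.

```shell
#!/bin/sh
# Sync static assets from the Jenkins workspace to S3 after the Maven build.
# Placeholders: adjust the local path and bucket to your project.
aws s3 sync "$WORKSPACE/src/main/webapp/static/" s3://your-bucket/static/ \
    --acl public-read \
    --cache-control "max-age=31536000"
```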
