
What's the best way to track private files in a public Mercurial repository?

"If it’s not in source control, it doesn’t exist."

This question was addressed for Git here: Techniques to handle a private and public repository?. What about for Mercurial?

I have several public Bitbucket repos (with multiple committers) where I'd like the source to be public, but which load API keys, SSH keys, and other sensitive info from untracked files. However, this means someone has to email around the new config file whenever we add a new Mailchimp or Hunch or Twilio API key. Is there a way to shield these files from public view somehow and still track them? Everyone is syncing their repo through Bitbucket.


There are two good ways to handle this (besides zerkms's solution, which doesn't offer the ease of synchronization you want, but is what I'd do anyway):

  1. Use Mercurial Queues. When you create a Mercurial queue with hg qinit --create-repo, it creates an overlay system that can be qpushed atop the existing repo. So you keep your secrets in queues, qpush them when you need them, and qpop them when you don't. With --create-repo the set of overlays (patches) is handled in a repository of its own, so people in the know can push/pull the secret overlay repo while people without access to it can use the base repo alone. The patch repo can be a private repo on Bitbucket or hosted elsewhere. (A workflow sketch follows this list.)

    or

  2. Use a subrepo exactly as described in the Git solution linked above. (A sketch of this also follows below.)
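
For option 1, the day-to-day workflow would look roughly like this. This is only a sketch; the private patch-repo URL is made up, and you could equally cd into .hg/patches and push from there instead of using --mq:

    $ hg qinit --create-repo               # manage .hg/patches as a repo of its own
    $ hg qnew secrets.patch                # start the secret overlay patch
    # (edit your config files, adding the real API/SSH keys)
    $ hg qrefresh                          # record those edits in the patch
    $ hg commit --mq -m "update secrets"   # commit the patch in the patch repo
    $ hg qpop -a                           # pop the overlay before publishing
    $ hg push                              # the public repo never sees the keys
    $ hg push --mq https://bitbucket.org/you/secrets-patches   # private patch repo
    $ hg qpush -a                          # reapply the overlay and keep working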
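For option 2, the setup would be something like the following (the private repo URL is again hypothetical). One caveat: committers without read access to the private repo will get an error when Mercurial tries to clone the subrepo on update.

    $ hg clone https://bitbucket.org/you/private-config private   # private repo
    $ echo "private = https://bitbucket.org/you/private-config" > .hgsub
    $ hg add .hgsub
    $ hg commit -m "track private config as a subrepo"

The secret files then live under private/, and only people who can read the private repo can pull them.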


Create filename.ext.sample files containing templates (probably filled with dummy data), which each committer copies and fills in with actual data in their particular working directory.

That is what I usually do ;-)
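
For instance, with made-up file and key names:

    $ cat settings.conf.sample
    mailchimp_api_key = FILL_ME_IN
    twilio_api_key    = FILL_ME_IN
    $ cp settings.conf.sample settings.conf    # each developer fills in real keys
    $ cat .hgignore
    syntax: glob
    settings.conf

The .sample template is tracked, and the real settings.conf is ignored, so secrets never end up in the public repo.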


Zerkms' solution is fast, easy, and clean, and is likely your best bet for keeping secure content from being tracked or published; however, as you say, "If it’s not in source control, it doesn’t exist." I find that far more often what I'm trying to keep out of source control is not a security concern, but simply a configuration setting. I believe these settings should be tracked, and my current employer has a rather clever setup for dealing with this, which I'll attempt to simplify / generalize / summarize here.

REPOSITORY
  code/
    ...
  scripts/
    configparse.sh
    ...
  config/
    common.conf
    env/
      development.conf
      testing.conf
      production.conf
    users/
      dimo414.conf
      mycoworker.conf
      ...
    hosts/
      dimo414-laptop.conf
      dimo414-server.conf
      mycoworker-laptop.conf
      ...
    local.conf*
  makefile
  .conf*

* untracked file

Hopefully the idea here is pretty clear: we define settings at each appropriate level, enabling highly granular control of the codebase's behavior in a logical and consistent fashion.

The scripts/configparse.sh script reads all the necessary configuration files in turn and builds .conf out of all the settings it finds (a minimal sketch of such a script appears after the list below).

  • config/common.conf is the starting point, and contains logical default values for every setting. Many will likely get overwritten, but something is specified here. It's an error for a setting to be found in another file that isn't first set in common.conf.
  • config/env/ controls the behavior in different environments, doing things like pointing to the correct database servers.
  • config/users/ looks for a $USER.conf file, useful for setting things I care about, such as increasing the logging level for aspects my team works on, or customizing behavior I prefer to use across all my machines.
  • config/hosts/ does the same for machines, looking for $HOSTNAME.conf. Useful for machine-specific settings like application paths or data directories.
  • config/local.conf is an untracked file, and lets you set checkout-specific values and/or content you don't want in version control.

The aggregate of all these settings is output to .conf, which is what the rest of the codebase looks for when loading settings.
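
A minimal sketch of what such a script could look like, assuming the .conf files are simple shell key=value assignments that get sourced (the real script would presumably also enforce the rule that every setting must first appear in common.conf):

    #!/bin/sh
    # configparse.sh - concatenate the layered config files into .conf.
    # Layers are appended most-general first, so when .conf is sourced the
    # last (most specific) assignment of each key wins.
    ENV="${1:-development}"     # which config/env/*.conf to apply
    HOST="$(hostname)"

    for f in config/common.conf \
             "config/env/$ENV.conf" \
             "config/users/$USER.conf" \
             "config/hosts/$HOST.conf" \
             config/local.conf; do
      if [ -f "$f" ]; then cat "$f"; fi   # skip layers that don't exist
    done > .conf

Run it as scripts/configparse.sh production (from the makefile, say), and the rest of the codebase simply reads the generated .conf.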

