
Version control for other kinds of files? (non-source code files)

Version control in programming is pretty much a necessity, and some programs allow for change-tracking or a form of version control for files other than source code (MS Word, InDesign, etc.).

Is there any kind of system or architecture/protocol that could be put in place to establish version control for non-source-code files of any arbitrary type, in any arbitrary networked/shared directory? [Running on a Mac OS X network]

(Not tracking individual changes made within individual files (as that wouldn't be possible), but at least recording something like "John Doe checked this file out at 12:01 PM on 07-15-2010", so that we can track who is modifying the files, and people know not to work on a file until it's checked back in.)

If no turn-key, software-based solution exists, is there a better alternative?

Perhaps it would be possible to build some kind of version control system out of Bash scripts/AppleScripts, Automator actions, and/or Finder plug-ins to add something like a "right click > check in/out file" command that syncs with a custom version control program on a server?
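
For illustration only, here is a minimal sketch of what such a script-based check-out could look like. The shared path and the .locks folder layout are hypothetical, and this only records who has a file; it doesn't stop anyone from copying the file to their local disk anyway.

    #!/bin/bash
    # checkout.sh -- record a lock file before editing a shared asset.
    # Usage: ./checkout.sh /Volumes/Projects/ebook01/cover.indd
    # NOTE: the /Volumes/Projects path and .locks layout are made up for this example.

    FILE="$1"
    LOCKDIR="$(dirname "$FILE")/.locks"
    LOCK="$LOCKDIR/$(basename "$FILE").lock"

    mkdir -p "$LOCKDIR"

    if [ -e "$LOCK" ]; then
        echo "Already checked out:"
        cat "$LOCK"
        exit 1
    fi

    # Record who took the file and when, so others know not to touch it.
    echo "$(whoami) checked this file out at $(date '+%I:%M%p on %m-%d-%Y')" > "$LOCK"
    echo "Checked out $FILE"

A matching check-in script would simply delete the lock file, and an Automator action or Finder service could wrap both scripts behind a right-click menu item.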

Or any ideas how to prevent people from making changes to files that other people might be working on?

(one of the issues is people copying/pasting the file to their HDD and working on it, which prevents the OS from throwing the "File in use" error. And telling them not to copy files like that isn't an option.)

Background:

My company works with various kinds of digital files (ebooks, interactive whiteboard activity files, photographs, videos, etc.). We frequently have multiple people working on each project, and sometimes they copy files and overwrite changes made by others who were working on the same file at the same time (i.e., they copy the file from a network share to their local HD, make changes, then copy it back to the network), keep old versions in the same directory as the working file, and lose track of who has which file where (like people copying the file out of the shared project directory into their personal shared folder, making changes, and moving it back and forth).


Git is deliberately stupid. It's so dumb it's literally named for being dumb.

Git is so dumb that it doesn't know or care what format the files it tracks are in. It doesn't know how to reconcile them in a merge if two people edit the same file. (Actually, it is that smart for a lot of file types, but that's an extra merge facility that is independent of the version-tracking feature.)

But it is smart enough to prevent you from overwriting changes when they do diverge, and it will also tell you who is to blame when that happens.

Git doesn't manage a central repository; there's no such concept. In Git, each person making changes has the complete repository on their local machine, and they pass changes to and from one another. You can "fake" a central repository by pushing changes to an officially "blessed" repository on some convenient server.
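
For example (the server name and file paths here are just placeholders), the "blessed" repository is nothing more than a bare repository that everyone agrees to push to and pull from:

    # On the server (run once): create the agreed-upon "blessed" repository.
    git init --bare /srv/git/projects.git

    # On each person's machine: clone it, work locally, then exchange changes.
    git clone ssh://fileserver/srv/git/projects.git
    cd projects
    git add ebook01/cover.indd
    git commit -m "Update cover art"
    git push origin master       # publish to the blessed repository
    git pull origin master       # pick up everyone else's changes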

So how does any of this answer your question?

Is there any kind of system or architecture/protocol that could be put in place to establish version control for non-source-code files of any arbitrary type, in any arbitrary networked/shared directory? [Running on a Mac OS X network]

Or any ideas how to prevent people from making changes to files that other people might be working on?

Git doesn't know or care what format it's tracking. It treats all files the same. What you really seem to be concerned about is preventing one person from clobbering another person's changes.

Git sidesteps that issue entirely. There is no official copy to clobber. If you are using a centralized server and one user tries to push a change that would overwrite a more recent change than the one that user has seen, the push fails. The user has to pull the new version, resolve the conflicts, and try again. Even if a user stubbornly drops the newer change on the floor and uploads his own without regard to the changes that occurred between his first pull and final push, no data is lost, because Git keeps everything, and a more responsible individual can cherry-pick the lost work and fix it.
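
Roughly, the workflow looks like this (repository and branch names are just examples, and the rejection message shown is approximate):

    git push origin master
    # ! [rejected]        master -> master (fetch first)
    # error: failed to push some refs ...

    git pull origin master      # fetch and merge the newer changes first
    # ...resolve any conflicts, commit the merge...
    git push origin master      # now the push succeeds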


No SCM system is going to reasonably meet your needs. If you are working with that much of a variety of files, then you need to invest in some kind of Digital Asset Management (DAM) or Content Management System (CMS). The ability to check out a file (thereby locking out other users) is a typical feature, and revisions can be tracked, logged, and managed (users check out the file, download it to edit, and check it back in when they are done). DAM systems and CMSs are specifically designed for the types of files you are working with, as noted in your question.

One quick addition to my answer: there are a lot of DAM and CMS systems out there, so your best first step is to create an RFP of sorts, breaking your current environment, needs, internal rules, and goals into easily digestible chunks, then send it out to a variety of vendors and assess their responses.


Since your server supports FUSE, one low-pain, low-gain possibility is to use CopyFS, which keeps a copy of every single version of each file. The main downsides are that it's unprincipled (it just keeps copies; it does nothing against concurrent edits and stores no changelogs) and that it can be resource-hungry (each time you save, you get a new version). The major advantage is that it's fully automatic, so you don't need any user training.
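
If I remember the CopyFS tooling correctly (treat the exact command name and arguments as an assumption and check the CopyFS documentation), mounting it looks roughly like this, with hypothetical paths:

    # The first directory holds the versioned copies; the second is where
    # users see and edit the current files as a normal folder.
    mkdir -p /shared/versions /shared/projects
    copyfs-mount /shared/versions /shared/projects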

TortoiseSVN and its counterparts for various other version control systems are fairly popular in the Windows world. They provide shell integration, though not application integration. On Mac OS X, Google finds SCPlugin; I have no idea how well it works.
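
If you do go the Subversion route, its locking model maps directly onto the check-in/check-out workflow you describe. A rough example (the file name is illustrative):

    # Mark binary files so they must be locked before editing.
    svn propset svn:needs-lock '*' cover.indd
    svn commit -m "Require a lock on the cover file"

    # Check the file out for exclusive editing, then release it when done.
    svn lock cover.indd -m "Editing the cover"
    svn commit -m "New cover art"     # committing releases the lock by default
    # or: svn unlock cover.indd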


For an open-source Digital Asset Management system, have a look at ResourceSpace.

http://www.colorhythm.com/prismpoint_FAQ.php
