find and replace within file

I have a requirement to search for a pattern like:

timeouts = {default = 3.0; };

and replace it with

timeouts = {default = 3000.0;.... };

i.e., multiply the timeout by a factor of 1000.

Is there any way to do this for all the files in a directory?

EDIT:

Please note that some of the files in the directory are symlinks. Is there any way to get this done for the symlinks as well?

Please note that "timeouts" also appears as a substring elsewhere in the files, so I want to make sure that only this line gets replaced. Any solution using sed, awk, or perl is acceptable.


Give this a try:

for f in *
do
    sed -i 's/\(timeouts = {default = [0-9]\+\)\(\.[0-9]\+;\)\( };\)/\1000\2....\3/' "$f" 
done

It will make the replacements in place for each file in the current directory. Some versions of sed require a backup extension after the -i option. You can supply one like this:

sed -i .bak ...

Some versions don't support in-place editing. You can do this:

sed '...' "$f" > tmpfile && mv tmpfile "$f"

Note that this is obviously not actually multiplying by 1000, so if the number is 3.1 it would become "3000.1" instead of 3100.0.
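To see that caveat concretely, here is the same sed expression run on a sample line (GNU sed regex syntax assumed):

```shell
# Apply the sed expression to a single sample line instead of a file:
echo 'timeouts = {default = 3.1; };' \
  | sed 's/\(timeouts = {default = [0-9]\+\)\(\.[0-9]\+;\)\( };\)/\1000\2....\3/'
# → timeouts = {default = 3000.1;.... };
```

The "000" is inserted textually between the integer part and the fraction, which is why 3.1 comes out as 3000.1 rather than 3100.0.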


You can do this:

perl -pi -e 's/(timeouts\s*=\s*\{default\s*=\s*)([0-9.-]+)/$1 . $2*1000/e' *

The /e modifier evaluates the replacement as Perl code, so the expression has to return the complete replacement text, captured prefix included.


One suggestion, whichever solution above you decide to use: it may be worth thinking through how you could refactor so you don't have to modify all of these files for a change like this again.

  • Do all of these scripts have similar functionality?
  • Can you create a module that they would all use for shared subroutines?
  • In the module, could you have a single line that would allow you to have a multiplier?

For me, any time I need to make similar changes in more than one file, it's the perfect time to be lazy and save myself time and maintenance headaches later.


$ perl -pi.bak -e 's/\w+\s*=\s*{\s*\w+\s*=\s*\K(-?[0-9.]+)/sprintf "%0.1f", 1000 * $1/eg' *

Notes:

  • The regex matches just the number (see \K in perlre)
  • The /e means the replacement is evaluated
  • I include a sprintf in the replacement just in case you need finer control over the formatting
  • Perl's -i can operate on a bunch of files

EDIT

It has been pointed out that some of the files are symbolic links. Given that this process is not idempotent (running it twice on the same file is bad), you had better generate a unique list of files in case one of the links points to a file that appears elsewhere in the list. Here is an example with find, though the code for a pre-existing list should be obvious.

$ find -L . -type f -exec realpath {} \; | sort -u | xargs -d '\n' perl ...

(Assumes none of your filenames contain a newline!)
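A small sketch of why the deduplication matters, using a throwaway directory with hypothetical file names (assumes realpath(1) is available, e.g. from GNU coreutils):

```shell
# Two directory entries that resolve to the same underlying file:
tmp=$(mktemp -d)
echo 'timeouts = {default = 3.0; };' > "$tmp/real.conf"
ln -s "$tmp/real.conf" "$tmp/alias.conf"
# find -L follows the symlink, so both entries show up, but after
# realpath + sort -u only one canonical path remains:
count=$(find -L "$tmp" -type f -exec realpath {} \; | sort -u | wc -l)
echo "$count"   # one unique path
rm -rf "$tmp"
```

Without the realpath/sort -u step, the perl command would see both names and multiply the same timeout twice.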
