
How can I delete a newline if it is the last character in a file?

I have some files from which I'd like to delete the last newline if it is the last character in the file. od -c shows me that the command I run does write the file with a trailing newline:

0013600   n   t  >  \n

I've tried a few tricks with sed, but the best I could think of isn't doing the trick:

sed -e '$s/\(.*\)\n$/\1/' abc

Any ideas how to do this?


perl -pe 'chomp if eof' filename >filename2

or, to edit the file in place:

perl -pi -e 'chomp if eof' filename

[Editor's note: -pi -e was originally -pie, but, as noted by several commenters and explained by @hvd, the latter doesn't work.]

This was described as a 'perl blasphemy' on the awk website I saw.

But, in a test, it worked.
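
For example, a quick check (the file name demo.txt is just an illustration):

printf 'foo\nbar\n' > demo.txt
perl -pi -e 'chomp if eof' demo.txt
od -c demo.txt     # now ends with "b   a   r" and no trailing \n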


You can take advantage of the fact that shell command substitutions remove trailing newline characters:

Simple form that works in bash, ksh, zsh:

printf %s "$(< in.txt)" > out.txt

Portable (POSIX-compliant) alternative (slightly less efficient):

printf %s "$(cat in.txt)" > out.txt

Note:

  • If in.txt ends with multiple newline characters, the command substitution removes all of them (thanks, Sparhawk); it doesn't remove whitespace characters other than trailing newlines. See the short demo after these notes.
  • Since this approach reads the entire input file into memory, it is only advisable for smaller files.
  • printf %s ensures that no newline is appended to the output (it is the POSIX-compliant alternative to the nonstandard echo -n; see http://pubs.opengroup.org/onlinepubs/009696799/utilities/echo.html and https://unix.stackexchange.com/a/65819)
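
For example, a quick way to see the multiple-newline behavior mentioned above (in.txt / out.txt as in the commands above):

printf 'foo\n\n\n' > in.txt
printf %s "$(< in.txt)" > out.txt
od -c out.txt      # only "f   o   o" remains; all trailing newlines were removed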

A guide to the other answers:

  • If Perl is available, go for the accepted answer - it is simple and memory-efficient (doesn't read the whole input file at once).

  • Otherwise, consider ghostdog74's Awk answer - it's obscure, but also memory-efficient; a more readable equivalent (POSIX-compliant) is:

  • awk 'NR > 1 { print prev } { prev=$0 } END { ORS=""; print }' in.txt

  • Printing is delayed by one line so that the final line can be handled in the END block, where it is printed without a trailing \n because the output record separator (ORS) is set to an empty string; a small demonstration follows this list.

  • If you want a verbose, but fast and robust solution that truly edits in-place (as opposed to creating a temp. file that then replaces the original), consider jrockway's Perl script.
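
To illustrate the delayed-print approach described above (the sample input here is inline, not from the original question):

printf 'a\nb\nc\n' | awk 'NR > 1 { print prev } { prev=$0 } END { ORS=""; print }' | od -c
# the dump ends with "c" and no trailing \n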


You can do this with head from GNU coreutils; it supports arguments that are relative to the end of the file. So to leave off the last byte, use:

head -c -1

To test for an ending newline you can use tail and wc. The following example saves the result to a temporary file and subsequently overwrites the original:

if [[ $(tail -c1 file | wc -l) == 1 ]]; then
  head -c -1 file > file.tmp
  mv file.tmp file
fi

You could also use sponge from moreutils to do "in-place" editing:

[[ $(tail -c1 file | wc -l) == 1 ]] && head -c -1 file | sponge file

You can also make a general reusable function by stuffing this in your .bashrc file:

# Example:  remove-last-newline < multiline.txt
function remove-last-newline(){
    local file=$(mktemp)
    cat > "$file"
    if [[ $(tail -c1 "$file" | wc -l) == 1 ]]; then
        head -c -1 "$file" > "${file}.tmp"
        mv "${file}.tmp" "$file"
    fi
    cat "$file"
    rm -f "$file"
}
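
For instance, once the function is loaded (e.g. after sourcing .bashrc), it can be used in a pipeline:

printf 'one\ntwo\n' | remove-last-newline | od -c
# the dump ends with "t   w   o" and no trailing \n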

Update

As noted by KarlWilbur in the comments and used in Sorentar's answer, truncate --size=-1 can replace head -c-1 and supports in-place editing.
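
A minimal sketch of that replacement (assuming GNU coreutils truncate):

[[ $(tail -c1 file | wc -l) == 1 ]] && truncate -s -1 file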


head -n -1 abc > newfile
tail -n 1 abc | tr -d '\n' >> newfile

Edit 2:

Here is an awk version (corrected) that doesn't accumulate a potentially huge array:

awk 'NR > 1 { print line } { line=$0 } END { printf "%s", $0 }' abc


gawk

awk '{q=p;p=$0}NR>1{print q}END{ORS = ""; print p}' file


A fast solution is using the gnu utility truncate:

[ -z "$(tail -c1 file)" ] && truncate -s-1 file

The test will be true if the file does have a trailing newline.

The removal is very fast and truly in place: no new file is needed, and the test reads only a single byte from the end of the file (tail -c1).
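
As a quick check (demo.txt is just a sample file):

printf 'foo\n' > demo.txt
[ -z "$(tail -c1 demo.txt)" ] && truncate -s-1 demo.txt
od -c demo.txt     # only "f   o   o" is left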


A very simple method for single-line files, requiring GNU echo from coreutils:

/bin/echo -n "$(cat "$file")"


If you want to do it right, you need something like this:

use autodie qw(open sysseek sysread truncate);

my $file = shift;
open my $fh, '+>>', $file;
my $pos = tell $fh;
sysseek $fh, $pos - 1, 0;
sysread $fh, my $buf, 1 or die 'No data to read?';

if($buf eq "\n"){
    truncate $fh, $pos - 1;
}

We open the file for reading and appending; opening for appending means that we are already seeked to the end of the file. We then get the numerical position of the end of the file with tell. We use that number to seek back one character, and then we read that one character. If it's a newline, we truncate the file to the character before that newline, otherwise, we do nothing.
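
Assuming the script above is saved as, say, strip_newline.pl (the name is arbitrary), it would be invoked directly on a file:

perl strip_newline.pl myfile.txt
od -c myfile.txt   # should no longer end with \n if it did before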

This runs in constant time and constant space for any input, and doesn't require any more disk space, either.


Here is a nice, tidy Python solution. I made no attempt to be terse here.

This modifies the file in-place, rather than making a copy of the file and stripping the newline from the last line of the copy. If the file is large, this will be much faster than the Perl solution that was chosen as the best answer.

It truncates a file by two bytes if the last two bytes are CR/LF, or by one byte if the last byte is LF. It does not attempt to modify the file if the last byte(s) are not (CR)LF. It handles errors. Tested in Python 2.6.

Put this in a file called "striplast" and chmod +x striplast.

#!/usr/bin/python

# strip newline from last line of a file


import sys

def trunc(filename, new_len):
    try:
        # open with mode "append" so we have permission to modify
        # cannot open with mode "write" because that clobbers the file!
        f = open(filename, "ab")
        f.truncate(new_len)
        f.close()
    except IOError:
        print "cannot write to file:", filename
        sys.exit(2)

# get input argument
if len(sys.argv) == 2:
    filename = sys.argv[1]
else:
    filename = "--help"  # wrong number of arguments so print help

if filename == "--help" or filename == "-h" or filename == "/?":
    print "Usage: %s <filename>" % sys.argv[0]
    print "Strips a newline off the last line of a file."
    sys.exit(1)


try:
    # must have mode "b" (binary) to allow f.seek() with negative offset
    f = open(filename, "rb")
except IOError:
    print "file does not exist:", filename
    sys.exit(2)


SEEK_EOF = 2
f.seek(-2, SEEK_EOF)  # seek to two bytes before end of file

end_pos = f.tell()

line = f.read()
f.close()

if line.endswith("\r\n"):
    trunc(filename, end_pos)
elif line.endswith("\n"):
    trunc(filename, end_pos + 1)

P.S. In the spirit of "Perl golf", here's my shortest Python solution. It slurps the whole file from standard input into memory, strips all newlines off the end, and writes the result to standard output. Not as terse as the Perl; you just can't beat Perl for little tricky fast stuff like this.

Remove the "\n" from the call to .rstrip() and it will strip all white space from the end of the file, including multiple blank lines.

Put this into "slurp_and_chomp.py" and then run python slurp_and_chomp.py < inputfile > outputfile.

import sys

sys.stdout.write(sys.stdin.read().rstrip("\n"))


Yet another perl WTDI:

perl -i -p0777we's/\n\z//' filename


$  perl -e 'local $/; $_ = <>; s/\n$//; print' a-text-file.txt

See also Match any character (including newlines) in sed.


perl -pi -e 's/\n$// if(eof)' your_file


Using dd:

file='/path/to/file'
[[ "$(tail -c 1 "${file}" | tr -dc '\n' | wc -c)" -eq 1 ]] && \
    printf "" | dd  of="${file}" seek=$(($(stat -f "%z" "${file}") - 1)) bs=1 count=1
    #printf "" | dd  of="${file}" seek=$(($(wc -c < "${file}") - 1)) bs=1 count=1


Assuming Unix line endings and that you only want to remove the last newline, this works:

sed -e '${/^$/d}'

It will not remove multiple trailing newlines...

* Works only if the last line is a blank line, i.e. the file ends with two consecutive newlines.
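
A short illustration of that caveat (sample input supplied inline):

printf 'foo\n\n' | sed -e '${/^$/d}' | od -c   # blank last line removed; output ends with "f o o \n"
printf 'foo\n'   | sed -e '${/^$/d}' | od -c   # a plain trailing newline is left untouched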


This is a good solution if you need it to work with pipes/redirection instead of reading from or writing to a file. It works with single or multiple lines, and whether or not there is a trailing newline.

# with trailing newline
echo -en 'foo\nbar\n' | sed '$s/$//' | head -c -1

# still works without trailing newline
echo -en 'foo\nbar' | sed '$s/$//' | head -c -1

# read from a file
sed '$s/$//' myfile.txt | head -c -1

Details:

  • head -c -1 truncates the last character of the string, regardless of what the character is. So if the string does not end with a newline, then you would be losing a character.
  • So to address that problem, we add another command that will add a trailing newline if there isn't one: sed '$s/$//'. The first $ means only apply the command to the last line. s/$// means substitute the "end of the line" with "nothing", which is basically a no-op, but it has the side effect of adding a trailing newline if there isn't one.

Note: the default head on macOS does not support negative -c values. You can do brew install coreutils and use ghead instead.


Yet another answer FTR (and my favourite!): echo/cat the thing you want to strip and capture the output through backticks. The final newline will be stripped. For example:

# Sadly, outputs newline, and we have to feed the newline to sed to be portable
echo thingy | sed -e 's/thing/sill/'

# No newline! Happy.
out=`echo thingy | sed -e 's/thing/sill/'`
printf %s "$out"

# Similarly for files:
file=`cat file_ending_in_newline`
printf %s "$file" > file_no_newline


ruby:

ruby -ne 'print $stdin.eof ? $_.strip : $_'

or:

ruby -ane 'q=p;p=$_;puts q if $.>1;END{print p.strip!}'


POSIX SED:

sed '${/^$/d}' file

$ - match last line


{ COMMANDS } - A group of commands may be enclosed between { and } characters. This is particularly useful when you want a group of commands to be triggered by a single address (or address-range) match.


The only time I've wanted to do this is for code golf, and then I've just copied my code out of the file and pasted it into an echo -n 'content'>file statement.


sed ':a;/^\n*$/{$d;N;};/\n$/ba' file


I had a similar problem, but was working with a Windows file and needed to keep the CRLF line endings -- my solution on Linux:

sed 's/\r//g' orig | awk '{if (NR>1) printf("\r\n"); printf("%s",$0)}' > tweaked


sed -n "1 x;1 !H
$ {x;s/\n*$//p;}
" YourFile

This should remove any last occurrence of \n in the file. It does not work on huge files (due to sed buffer limitations).


Here's a simple solution that uses sed. Your version of sed needs to support the -z option.

       -z, --null-data

              separate lines by NUL characters

It can either be used in a pipe or used to edit the file in place with the -i option:

sed -ze 's/\n$//' file
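
For in-place editing, the same expression can be combined with -i (a sketch, assuming GNU sed; note that this removes only a single trailing newline):

sed -zi 's/\n$//' file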