
Delete blank rows from CSV?

I have a large csv file in which some rows are entirely blank. How do I use Python to delete all blank rows from the csv?

After all your suggestions, this is what I have so far:

import csv

# open input csv for reading
inputCSV = open(r'C:\input.csv', 'rb')

# create output csv for writing
outputCSV = open(r'C:\OUTPUT.csv', 'wb')

# prepare output csv for appending
appendCSV = open(r'C:\OUTPUT.csv', 'ab')

# create reader object
cr = csv.reader(inputCSV, dialect = 'excel')

# create writer object
cw = csv.writer(outputCSV, dialect = 'excel')

# create writer object for append
ca = csv.writer(appendCSV, dialect = 'excel')

# add pre-defined fields
cw.writerow(['FIELD1_','FIELD2_','FIELD3_','FIELD4_'])

# delete existing field names in input CSV
# ???????????????????????????

# loop through input csv, check for blanks, and write all changes to append csv
for row in cr:
    if row or any(row) or any(field.strip() for field in row):
        ca.writerow(row)

# close files
inputCSV.close()
outputCSV.close()
appendCSV.close()

Is this ok or is there a better way to do this?


Use the csv module:

import csv
...

with open(in_fnam, newline='') as in_file:
    with open(out_fnam, 'w', newline='') as out_file:
        writer = csv.writer(out_file)
        for row in csv.reader(in_file):
            if row:
                writer.writerow(row)

If you also need to remove rows where all of the fields are empty, change the if row: line to:

if any(row):

And if you also want to treat fields that consist of only whitespace as empty you can replace it with:

if any(field.strip() for field in row):

Note that in Python 2.x and earlier, the csv module expected binary files, and so you'd need to open your files with the 'b' flag. In 3.x, doing this will result in an error.
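
For reference, a minimal version-agnostic sketch of that point, with the file names as placeholders:

import csv
import sys

# Open the files the way the running interpreter's csv module expects.
if sys.version_info[0] >= 3:
    in_file = open('input.csv', newline='')           # 3.x: text mode, newline=''
    out_file = open('output.csv', 'w', newline='')
else:
    in_file = open('input.csv', 'rb')                 # 2.x: binary mode
    out_file = open('output.csv', 'wb')

writer = csv.writer(out_file)
for row in csv.reader(in_file):
    if any(field.strip() for field in row):
        writer.writerow(row)

in_file.close()
out_file.close()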


Surprised that nobody here mentioned pandas. Here is a possible solution.

import pandas as pd

# read_csv skips fully blank lines by default (skip_blank_lines=True),
# so writing the frame back out drops them.
df = pd.read_csv('input.csv')
df.to_csv('output.csv', index=False)


Delete empty rows from a .csv file using Python:

import csv

with open('demo004.csv') as input, open('demo005.csv', 'w', newline='') as output:
    writer = csv.writer(output)
    for row in csv.reader(input):
        if any(field.strip() for field in row):
            writer.writerow(row)

Thank you


You have to open a second file, write all non-blank lines to it, delete the original file, and rename the second file to the original name.

EDIT: a truly blank line will look like '\n':

for line in f1.readlines():
    if line.strip() == '':
        continue
    f2.write(line)

a line with all blank fields would look like ',,,,,\n'. If you consider this a blank line:

for line in f1.readlines():
    if ''.join(line.split(',')).strip() == '':
        continue
    f2.write(line)

Opening, closing, deleting, and renaming the files is left as an exercise for you. (Hint: import os, help(open), help(os.rename), help(os.unlink))
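
For completeness, a minimal sketch of that full flow, assuming the file to clean is data.csv, a line-based definition of "blank", and a placeholder temporary file name:

import os

src = 'data.csv'
tmp = 'data.csv.tmp'

# Write every non-blank line to a temporary file.
with open(src) as f1, open(tmp, 'w') as f2:
    for line in f1:
        if line.strip() == '':
            continue
        f2.write(line)

# Replace the original with the cleaned copy.
os.unlink(src)
os.rename(tmp, src)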

EDIT2: Laurence Gonsalves brought to my attention that a valid csv file could have blank lines embedded in quoted csv fields, like 1, 'this\n\nis tricky',123.45. In this case the csv module will take care of that for you. I'm sorry Laurence, your answer deserved to be accepted. The csv module will also address the concerns about a line like "","",""\n.
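
A small illustration of that edge case: csv.reader yields one logical row for a quoted field that spans several physical lines, whereas the line-based loop above would wrongly treat the embedded blank line as a row to drop.

import csv
import io

# One logical CSV record spread over three physical lines,
# because the quoted field contains a blank line.
data = '1,"this\n\nis tricky",123.45\n'

rows = list(csv.reader(io.StringIO(data)))
print(rows)       # [['1', 'this\n\nis tricky', '123.45']]
print(len(rows))  # 1 -- the embedded blank line is not a separate row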


Doing it with pandas is very simple. Open your csv file with pandas:

import pandas as pd

df = pd.read_csv("example.csv")
# Count the missing values in each column
print(df.isnull().sum())
# Drop rows containing missing values (use how='all' to drop only fully blank rows)
modifiedDF = df.dropna()
# Save the result back to a csv file
modifiedDF.to_csv('modifiedExample.csv', index=False)


Python code to remove blank lines from a CSV file without creating another file.

import csv

def ReadWriteconfig_file(file):
    try:
        file_object = open(file, 'r')
        lines = csv.reader(file_object, delimiter=',', quotechar='"')
        flag = 0
        data = []
        for line in lines:
            if line == []:
                flag = 1
                continue
            else:
                data.append(line)
        file_object.close()
        if flag == 1:  # a blank line was present in the file
            file_object = open(file, 'w')
            for line in data:
                str1 = ','.join(line)
                file_object.write(str1 + "\n")
            file_object.close()
    except Exception as e:
        print(e)


Here is a solution using pandas that removes blank rows.

import pandas as pd

df = pd.read_csv('input.csv')
df.dropna(axis=0, how='all', inplace=True)
df.to_csv('output.csv', index=False)


I need to do this, but without a blank row being written at the end of the CSV file, which this code unfortunately does (and which is also what Excel does if you Save as .csv). My (even simpler) code using the csv module does this too:

import csv

input = open("M51_csv_proc.csv", 'rb')
output = open("dumpFile.csv", 'wb')
writer = csv.writer(output)
for row in csv.reader(input):
    writer.writerow(row)
input.close()
output.close() 

M51_csv_proc.csv has exactly 125 rows; the program always outputs 126 rows, the last one being blank.

I've been through all these threads and nothing seems to change this behaviour.


This script removes all the CR / CRLF characters from a CSV file that has lines like this:

"My name";mail@mail.com;"This is a comment.
Thanks!"

Execute the script https://github.com/eoconsulting/lr2excelcsv/blob/master/lr2excelcsv.py

Result (in Excel CSV format):

"My name",mail@mail.com,"This is a comment. Thanks!"


Replace PATH_TO_YOUR_CSV with the path to your file:

import pandas as pd

df = pd.read_csv('PATH_TO_YOUR_CSV')
new_df = df.dropna()
new_df.to_csv('output.csv', index=False)

or in-line:

import pandas as pd

pd.read_csv('data.csv').dropna().to_csv('output.csv', index=False)


I had the same problem.

I converted the .csv file to a dataframe and after that I converted the dataframe back to the .csv file.

The initial .csv file with the blank lines was 'csv_file_logger2.csv'.

So, I did the following:

import pandas as pd

df = pd.read_csv('csv_file_logger2.csv')
df.to_csv('out2.csv', index=False)
