How to average column data for multiple columns in text files?
I have a tab-delimited text data file in the following format:
Depth Temp Salinity
0.30 28.30 31.90
0.30 28.30 31.90
0.30 28.20 31.90
0.30 28.20 31.90
0.40 28.20 32.00
0.40 28.00 32.00
0.50 28.00 31.90
0.60 28.00 32.00
0.70 27.90 32.00
0.60 27.90 32.10
What I want to achieve is to find any lines that share the same value in the Depth column and group them together. For each group I then want to average the values in the other columns (leaving the Depth column as-is), sort the rows by Depth, and write everything back out in the original file format. So for the example file above, the output would be:
Depth Temp Salinity
0.30 28.25 31.90
0.40 28.10 32.00
0.50 28.00 31.90
0.60 27.95 32.05
0.70 27.90 32.00
I understand I need to use .readlines() to grab the relevant lines, but how do I grab only the duplicate lines?
Thanks in advance!
You should use a dictionary where the key is the depth.
lines = [
    "0.30 28.30 31.90",
    "0.30 28.30 31.90",
    "0.30 28.20 31.90",
    "0.30 28.20 31.90",
    "0.40 28.20 32.00",
    "0.40 28.00 32.00",
    "0.50 28.00 31.90",
    "0.60 28.00 32.00",
    "0.70 27.90 32.00",
    "0.60 27.90 32.10"
]

# totals[depth] = (count, temp_sum, salinity_sum)
totals = {}
for line in lines:
    depth, temp, salinity = map(float, line.split())
    count, temp_sum, sal_sum = totals.get(depth, (0, 0.0, 0.0))
    totals[depth] = (count + 1, temp_sum + temp, sal_sum + salinity)

for depth in sorted(totals):  # the question also asks for output sorted by depth
    count, temp_sum, sal_sum = totals[depth]
    print("%.2f %.2f %.2f" % (depth, temp_sum / count, sal_sum / count))
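The same accumulation can also be written with collections.defaultdict, which keeps the individual readings per depth and avoids the explicit membership check (a sketch, using a few of the sample rows from the question):

```python
from collections import defaultdict

lines = [
    "0.30 28.30 31.90",
    "0.30 28.20 31.90",
    "0.40 28.20 32.00",
    "0.40 28.00 32.00",
]

groups = defaultdict(list)  # depth -> list of (temp, salinity) rows
for line in lines:
    depth, temp, salinity = map(float, line.split())
    groups[depth].append((temp, salinity))

for depth in sorted(groups):  # sort by depth for the final output
    rows = groups[depth]
    mean_temp = sum(t for t, s in rows) / len(rows)
    mean_sal = sum(s for t, s in rows) / len(rows)
    print("%.2f %.2f %.2f" % (depth, mean_temp, mean_sal))
```

For this sample it prints one averaged row per depth (28.25/31.90 at 0.30, 28.10/32.00 at 0.40). Keeping the full list per key costs a little more memory than running sums, but lets you compute other statistics (median, min/max) later without changing the loop.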
If you can afford to read the entire file into memory, itertools.groupby may simplify your code:
from itertools import groupby

with open('file.txt') as f:
    header = next(f).strip()  # read the header line separately
    rows = [list(map(float, line.split("\t"))) for line in f]

print(header)
key_fun = lambda row: row[0]  # group on the Depth column
sorted_rows = sorted(rows, key=key_fun)  # groupby needs sorted input
for depth, group in groupby(sorted_rows, key=key_fun):
    group = list(group)
    mean_temp = sum(row[1] for row in group) / len(group)
    mean_salinity = sum(row[2] for row in group) / len(group)
    print("%f\t%f\t%f" % (depth, mean_temp, mean_salinity))
Using numpy can simplify the calculation:
import numpy as np

with open("data.txt") as f:
    titles = f.readline().split()  # keep the header out of the numeric data
    data = np.loadtxt(f)

# sort the rows by depth, then split wherever the depth changes
data = data[np.argsort(data[:, 0])]
split_index = np.where(np.diff(data[:, 0]) > 0)[0] + 1

print("\t".join(titles))
for group in np.split(data, split_index):
    print("\t".join("%f" % x for x in np.average(group, axis=0)))
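If bringing in pandas is an option, groupby plus mean does the whole job in a few lines. A sketch, where the io.StringIO wrapper just stands in for the real tab-separated file:

```python
import io
import pandas as pd

# sample data in the question's tab-separated format
raw = (
    "Depth\tTemp\tSalinity\n"
    "0.30\t28.30\t31.90\n"
    "0.30\t28.20\t31.90\n"
    "0.40\t28.20\t32.00\n"
    "0.40\t28.00\t32.00\n"
)

df = pd.read_csv(io.StringIO(raw), sep="\t")
# average Temp and Salinity per depth; as_index=False keeps Depth as a column
out = df.groupby("Depth", as_index=False).mean()
print(out.to_csv(sep="\t", index=False, float_format="%.2f"))
```

groupby also sorts by the key by default, so the output comes back in Depth order without an explicit sort.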