
Is there a more efficient method to process large amounts of data than through arrays?

Hej there, I am writing data acquisition and analysis software for a physical measurement setup in Python. In the process I gather massive amounts of data points (easily on the order of 1,000,000 or more) which I subsequently analyze. So far I am using arrays of float numbers, which in principle do the job. However, I am getting strange effects in the acquired data as I use more and more data points per measurement, which makes me wonder whether the handling of the arrays is so inefficient that writing into them causes a significant time delay in the data acquisition loop.

Is that a possibility? Do you have any suggestions on how to improve the handling time in the writing process (it is a matter of microseconds), or is that not a possible influence and I need to look somewhere else?

Thanks in advance!


Do you mean lists? You can use NumPy to handle numerical arrays efficiently and with good performance.

From the NumPy website:

First of all, they are great for performing calculations that rely heavily on mathematical and numerical operations. They can work natively with matrices and arrays, perform operations on them, find eigenvectors, compute integrals, and solve differential equations.

NumPy’s array class (which is used to implement the matrix class) is implemented with speed in mind, so accessing NumPy arrays is faster than accessing Python lists. Further, NumPy implements an array language, so that most loops are not needed.
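For illustration, here is a minimal sketch of what that could look like for your acquisition loop, assuming you currently append floats to a Python list one sample at a time (the `read_sample` function below is a hypothetical stand-in for your actual DAQ call): preallocate the array once, fill it by index, and do the analysis with vectorized NumPy operations instead of Python loops.

```python
import numpy as np

def read_sample():
    # Hypothetical placeholder for your actual data acquisition call;
    # returns one float per invocation.
    return np.random.random()

n_samples = 1_000_000

# Preallocate the whole buffer once; writing by index avoids the
# repeated memory growth that appending to a Python list can cause.
data = np.empty(n_samples, dtype=np.float64)
for i in range(n_samples):
    data[i] = read_sample()

# Analysis then runs as vectorized operations in compiled code
# rather than as explicit Python loops.
mean = data.mean()
std = data.std()
spectrum = np.abs(np.fft.rfft(data))
print(mean, std, spectrum[:5])
```

Preallocating like this keeps the per-sample work in the loop down to a single indexed write, which is usually the cheapest option if the number of samples is known (or can be bounded) in advance.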
