Why do Python programmers still use old-style division?
I started using Python in 2001. I loved the simplicity of the language, but one feature that annoyed the heck out of me was the /
operator, which would bite me in subtle places like:
def mean(seq):
    """
    Return the arithmetic mean of a list
    (unless it just happens to contain all ints)
    """
    return sum(seq) / len(seq)
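Under classic division, mean([1, 2]) returns 1 rather than 1.5 when the list contains only ints. A minimal sketch of the two usual fixes (the __future__ import, or an explicit float conversion; the helper names are illustrative):

```python
from __future__ import division  # PEP 238: makes / true division (a no-op on Python 3)

def mean(seq):
    """Arithmetic mean as a float, even for an all-int list."""
    return sum(seq) / len(seq)

def mean_classic(seq):
    """The pre-PEP-238 workaround: force float arithmetic explicitly."""
    return float(sum(seq)) / len(seq)
```

With the import in effect, mean([1, 2]) is 1.5 under Python 2 and Python 3 alike.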
Fortunately, PEP 238 had already been written, and as soon as I found out about the new from __future__ import division
statement, I started religiously adding it to every .py file I wrote.
But here it is nearly 9 years later and I still frequently see Python code samples that use /
for integer division. Is //
not a widely-known feature? Or is there actually a reason to prefer the old way?
I think //
for truncation is reasonably well known, but people are reluctant to "import from the future" in every module they write.
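One nuance worth noting: // is floor division rather than strict truncation, which only matters for negative operands. A quick sketch of the difference:

```python
# // floors (rounds toward negative infinity); int() truncates toward zero
q1 = 7 // 2      # 3
q2 = -7 // 2     # -4: floor division, where truncation toward zero would give -3
q3 = 7.0 // 2    # 3.0: also defined on floats, and preserves the float type
```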
The classic approach (using float(sum(seq))
and so on to get a float result, int(...)
to truncate an otherwise-float division) is the one you'd use in C++ (and with slightly different syntax in C, Java, Fortran, ...), as well as "natively" in every Python release through 2.7 included, so it's hardly surprising if a lot of people are very used to, and comfortable with, said "classic approach".
When Python 3 becomes more widespread and widely used than Python 2, I expect things to change.
You can change the behaviour of /
for the whole interpreter if you prefer, so it's not necessary to import from __future__ in every module:
$ python -c "print 1/2"
0
$ python -Qwarn -c "print 1/2"
-c:1: DeprecationWarning: classic int division
0
$ python -Qnew -c "print 1/2"
0.5
To me it has always made sense that dividing ints gives an int result in a language that doesn't normally do on-the-fly type conversions.