How do I use Python to find values between XML tags?
I am using a Google site to retrieve weather information, and I want to find values between XML tags. The following code gives me the weather condition of a city, but I am unable to obtain other parameters such as temperature. If possible, please also explain how the split function used in the code works:
import urllib

def getWeather(city):
    # create google weather api url
    url = "http://www.google.com/ig/api?weather=" + urllib.quote(city)
    try:
        # open google weather api url
        f = urllib.urlopen(url)
    except:
        # if there was an error opening the url, return
        return "Error opening url"
    # read contents to a string
    s = f.read()
    # extract weather condition data from xml string
    weather = s.split("<current_conditions><condition data=\"")[-1].split("\"")[0]
    # if there was an error getting the condition, the city is invalid
    if weather == "<?xml version=":
        return "Invalid city"
    # return the weather condition
    return weather

def main():
    while True:
        city = raw_input("Give me a city: ")
        weather = getWeather(city)
        print(weather)

if __name__ == "__main__":
    main()
Thank You
USE A PARSER.
You can't parse XML using regex(es), so don't try. Here's a start to finding an XML parser in Python. Here's a good site for learning about parsing XML in Python.
UPDATE: Given the new info about PyS60, here's the documentation for using XML from Nokia's website.
UPDATE 2: @Nas Banov has requested sample code, so here it is:
import urllib
from xml.parsers import expat

def start_element_handler(name, attrs):
    """
    My handler for the event that fires when the parser sees an
    opening tag in the XML.
    """
    # If we care about more than just the temp data, we can extend this
    # logic with ``elif``. If the XML gets really hairy, we can create a
    # ``dict`` of handler functions and index it by tag name, e.g.,
    # { 'humidity': humidity_handler }
    if 'temp_c' == name:
        print "The current temperature is %(data)s degrees Celsius." % attrs

def process_weather_conditions():
    """
    Main logic of the POC; set up the parser and handle resource
    cleanup.
    """
    my_parser = expat.ParserCreate()
    my_parser.StartElementHandler = start_element_handler

    # I don't know if the S60 supports try/finally, but that's not
    # the point of the POC.
    try:
        f = urllib.urlopen("http://www.google.com/ig/api?weather=30096")
        my_parser.ParseFile(f)
    finally:
        f.close()

if __name__ == '__main__':
    process_weather_conditions()
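If we also want the other values the question asks about (temperature in Fahrenheit, humidity, wind), the handler can be extended with the ``elif`` approach mentioned in the comment above. This is a sketch of my own, not part of the original answer; the extra tag names (temp_f, humidity, wind_condition) are taken from the sample Google output shown further down this page:

def start_element_handler(name, attrs):
    """Report several current-condition values, not just temp_c."""
    if 'temp_c' == name:
        print "The current temperature is %(data)s degrees Celsius." % attrs
    elif 'temp_f' == name:
        print "That is %(data)s degrees Fahrenheit." % attrs
    elif name in ('humidity', 'wind_condition'):
        # these attributes already carry readable text,
        # e.g. "Humidity: 61%" or "Wind: N at 21 mph"
        print attrs['data']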
I would suggest using an XML Parser, just like Hank Gay suggested. My personal suggestion would be lxml, as I'm currently using it on a project and it extends the very usable ElementTree interface already present in the standard lib (xml.etree).
lxml adds support for XPath, XSLT, and various other features lacking in the standard ElementTree module.
Regardless of which you choose, an XML parser is by far the best option, as you'll be able to deal with the XML document as a Python object. This means your code would be something like:
# existing code up to...
s = f.read()

import lxml.etree as ET

# ET.parse() expects a file name or file object; use fromstring() for a string
root = ET.fromstring(s)
current = root.find(".//current_conditions/condition")
condition_data = current.get("data")
weather = condition_data
return weather
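Since the question also asks about temperature, the same parsed tree can be queried for the other current-condition elements. A quick sketch of my own (the temp_c, temp_f and humidity tag names are taken from the sample output further down this page):

# assumes ``root`` from the snippet above
temp_c   = root.find(".//current_conditions/temp_c").get("data")
temp_f   = root.find(".//current_conditions/temp_f").get("data")
humidity = root.find(".//current_conditions/humidity").get("data")
print "%s C (%s F), %s" % (temp_c, temp_f, humidity)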
XML is structured data, so you can do much better than string manipulation to fetch values out of it. There are the xml.sax, xml.dom and xml.etree.ElementTree modules in the standard library, as well as the high-quality lxml library, which can do this work for you much more reliably.
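As a rough illustration of the standard-library route, here is a minimal sketch of my own using xml.dom.minidom (the tag names come from the sample output below; the postal code is just the one used in the example there):

import urllib
from xml.dom import minidom

f = urllib.urlopen("http://www.google.com/ig/api?weather=" + urllib.quote("94043"))
dom = minidom.parseString(f.read())
f.close()

# every interesting value sits in the "data" attribute of its tag
condition = dom.getElementsByTagName("condition")[0].getAttribute("data")
temp_c    = dom.getElementsByTagName("temp_c")[0].getAttribute("data")
print "%s, %s C" % (condition, temp_c)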
Well, here goes - a non-full-parser solution for your particular case:

import urllib

def getWeather(city):
    ''' given city name or postal code,
        return dictionary with current weather conditions
    '''
    url = 'http://www.google.com/ig/api?weather='
    try:
        f = urllib.urlopen(url + urllib.quote(city))
    except:
        return "Error opening url"
    s = f.read().replace('\r', '').replace('\n', '')
    if '<problem' in s:
        return "Problem retrieving weather (invalid city?)"
    weather = s.split('</current_conditions>')[0] \
               .split('<current_conditions>')[-1] \
               .strip('</>"')
    wdict = dict(i.split(' data="') for i in weather.split('"/><'))
    return wdict
and an example of use:
>>> weather = getWeather('94043')
>>> weather
{'temp_f': '67', 'temp_c': '19', 'humidity': 'Humidity: 61%', 'wind_condition': 'Wind: N at 21 mph', 'condition': 'Sunny', 'icon': '/ig/images/weather/sunny.gif'}
>>> weather['humidity']
'Humidity: 61%'
>>> print '%(condition)s\nTemperature %(temp_c)s C (%(temp_f)s F)\n%(humidity)s\n%(wind_condition)s' % weather
Sunny
Temperature 19 C (67 F)
Humidity: 61%
Wind: N at 21 mph
PS. Note that a fairly trivial change in Google's output format will break this - say, if they were to add extra spaces or tabs between tags or attributes (which they avoid, to decrease the size of the HTTP response). If they did, we would have to get acquainted with regular expressions and re.split().
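For what it is worth, here is a rough sketch of what such a regex-based fallback could look like (using re.search and re.findall rather than re.split; the pattern is my own guess and has not been tested against every variation Google might produce):

import re

def parse_current_conditions(s):
    '''Pull tag/value pairs out of the <current_conditions> block,
       tolerating extra whitespace around tags and attributes.'''
    block = re.search(r'<current_conditions>(.*?)</current_conditions>', s, re.S)
    if not block:
        return {}
    # each element looks like <temp_c data="19"/>, possibly with stray spaces
    pairs = re.findall(r'<\s*(\w+)\s+data\s*=\s*"([^"]*)"', block.group(1))
    return dict(pairs)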
PPS. How str.split(sep) works is explained in the documentation; here is an excerpt: "Return a list of the words in the string, using sep as the delimiter string. ... The sep argument may consist of multiple characters (for example, '1<>2<>3'.split('<>') returns ['1', '2', '3'])." So 'text1<tag>text2</tag>text3'.split('</tag>') gives us ['text1<tag>text2', 'text3'], then [0] picks up the 1st element 'text1<tag>text2', then we split at '<tag>' and pick up 'text2', which contains the data we are interested in. Quite trite really.
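A quick interactive walk-through of that chain, using the same toy string:

>>> s = 'text1<tag>text2</tag>text3'
>>> s.split('</tag>')
['text1<tag>text2', 'text3']
>>> s.split('</tag>')[0]
'text1<tag>text2'
>>> s.split('</tag>')[0].split('<tag>')[-1]
'text2'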