
How to store JSON data on a local machine?

This question has plagued me for months now, and no matter how many articles and topics I read, I've gotten no good information...

I want to send a request to a server which returns a JSON file. I want to take those results and load them into tables on my local machine, preferably Access or Excel, so I can sort and manipulate the data.

Is there a way to do this...? Please help!!


Google comes up with this: json2excel.

Or write your own little application.

EDIT

I decided to be nice and write a Python 3 application for you. Use it on the command line like this: python3 jsontoxml.py infile1.json infile2.json, and it will output infile1.json.xml and infile2.json.xml.

#!/usr/bin/env python3

import json
import re
import sys
from xml.dom.minidom import parseString

if len(sys.argv) < 2:
    print("Need to specify at least one file.")
    sys.exit(1)

indent = " " * 4


def parseitem(item, document):
    # Dicts become nested elements, lists are flattened,
    # and everything else becomes text content.
    if isinstance(item, dict):
        parsedict(item, document)
    elif isinstance(item, list):
        for listitem in item:
            parseitem(listitem, document)
    else:
        document.append(str(item))


def parsedict(jsondict, document):
    # Each key becomes an element wrapping its recursively parsed value.
    for name, value in jsondict.items():
        document.append("<%s>" % name)
        parseitem(value, document)
        document.append("</%s>" % name)


for infile in sys.argv[1:]:
    with open(infile) as f:
        orig = json.load(f)

    document = []
    parsedict(orig, document)

    xmlcontent = parseString("".join(document)).toprettyxml(indent)
    # Collapse elements that contain only text onto a single line; see
    # http://stackoverflow.com/questions/749796/pretty-printing-xml-in-python/3367423#3367423
    xmlcontent = re.sub(r">\n\s+([^<>\s].*?)\n\s+</", r">\g<1></", xmlcontent, flags=re.DOTALL)

    with open(infile + ".xml", "w") as outfile:
        outfile.write(xmlcontent)

Sample input

{"widget": {
    "debug": "on",
    "window": {
        "title": "Sample Konfabulator Widget",
        "name": "main_window",
        "width": 500,
        "height": 500
    },
    "image": { 
        "src": "Images/Sun.png",
        "name": "sun1",
        "hOffset": 250,
        "vOffset": 250,
        "alignment": "center"
    },
    "text": {
        "data": "Click Here",
        "size": 36,
        "style": "bold",
        "name": "text1",
        "hOffset": 250,
        "vOffset": 100,
        "alignment": "center",
        "onMouseUp": "sun1.opacity = (sun1.opacity / 100) * 90;"
    }
}}

Sample output

<?xml version="1.0" ?>
<widget>
    <debug>on</debug>
    <window>
        <title>Sample Konfabulator Widget</title>
        <name>main_window</name>
        <width>500</width>
        <height>500</height>
    </window>
    <image>
        <src>Images/Sun.png</src>
        <name>sun1</name>
        <hOffset>250</hOffset>
        <vOffset>250</vOffset>
        <alignment>center</alignment>
    </image>
    <text>
        <data>Click Here</data>
        <size>36</size>
        <style>bold</style>
        <name>text1</name>
        <hOffset>250</hOffset>
        <vOffset>100</vOffset>
        <alignment>center</alignment>
        <onMouseUp>sun1.opacity = (sun1.opacity / 100) * 90;</onMouseUp>
    </text>
</widget>


It's probably overkill, but MongoDB uses JSON-style documents as its native format. That means you can insert your JSON data directly with little or no modification. It can handle JSON data on its own, without you having to jump through hoops to force your data into a more RDBMS-friendly format.

It is open-source software, available for most major platforms, and it can handle very large amounts of data spread across multiple servers.

Its command shell is probably not as easy to use as Excel or Access, but it can do sorting and filtering on its own, and there are bindings for most programming languages (e.g. C, Python and Java) if you find that you need to do more tricky stuff.
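For illustration, here is a minimal sketch of pushing a JSON file straight into MongoDB with the pymongo driver; the database and collection names are hypothetical, and a mongod instance is assumed to be running locally:

import json

from pymongo import MongoClient  # pip install pymongo

# Hypothetical names; assumes mongod on localhost:27017.
client = MongoClient("localhost", 27017)
collection = client["mydb"]["widgets"]

with open("infile1.json") as f:
    doc = json.load(f)

# A JSON object maps directly onto a MongoDB document -- no flattening needed.
collection.insert_one(doc)

# The nested structure stays queryable as-is:
print(collection.find_one({"widget.debug": "on"}))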

EDIT:

For importing and exporting data to and from other, more common formats, MongoDB has a couple of useful utilities (mongoimport and mongoexport). CSV is supported, although you should keep in mind that JSON uses structured objects and it is not easy to come up with a direct mapping to a table-based model like CSV, especially on a schema-free database like MongoDB.

Converting JSON to CSV or any other RDBMS-friendly format comes close to (if it does not outright enter) the field of Object-Relational Mapping, which in general is neither simple nor something that can be easily automated.

The MongoDB tools, for example, allow you to create CSV files, but you have to specify which field will go in each column, implicitly assuming that there is in fact some kind of schema in your data.
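A rough sketch of that kind of export in Python (database, collection and field names are all hypothetical): the column list has to be spelled out by hand, and missing fields become empty cells, which is exactly the implicit schema assumption described above.

import csv

from pymongo import MongoClient  # pip install pymongo

# Hypothetical names; assumes a local mongod.
collection = MongoClient()["mydb"]["windows"]

# The "schema" must be declared by hand -- nothing in MongoDB
# guarantees that every document actually has these fields.
fields = ["name", "width", "height"]

with open("windows.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    for doc in collection.find({}, {"_id": 0}):
        # Missing fields are padded instead of raising an error.
        writer.writerow({k: doc.get(k, "") for k in fields})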

MongoDB allows you to store and manipulate structured JSON data without having to go through a cumbersome mapping process that can be very frustrating. You would have to adjust your way of thinking, moving a bit away from the conventional tabular view of databases, but it lets you work on the data as it is meant to be worked on, rather than trying to force the tabular model onto it.


JSON (like XML) is a tree rather than a literal table of elements. You will need to populate the table by hand (essentially doing a stack of SQL LEFT JOINs) or populate a bunch of tables and manipulate the joins by hand.

Or is the JSON flat-packed? It MAY be possible to do what you're asking; I'm just pointing out that there's no guarantee.

If it's a quick kludge and the data is flat-packed, then a quick script to read the JSON, dump it to CSV and then open it in Excel will probably be easiest.
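For example, a minimal sketch of that kludge, assuming the JSON is a flat array of objects that all share the same keys (file names are hypothetical):

import csv
import json

# Assumes data.json holds a flat array like:
# [{"col1": "a", "col2": 1}, {"col1": "b", "col2": 2}]
with open("data.json") as f:
    rows = json.load(f)

with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)

The resulting data.csv opens directly in Excel for sorting and filtering.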


Storing the data in Access or Excel cannot be done easily, I guess. You would essentially have to parse the JSON string with any programming language that supports it (PHP, Node.js and Python all have native support for it) and then use a library to output an Excel sheet with the data.
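In Python, for instance, a minimal sketch with the third-party openpyxl library (again assuming a flat array of objects; file names are hypothetical):

import json

from openpyxl import Workbook  # pip install openpyxl

with open("data.json") as f:
    rows = json.load(f)

wb = Workbook()
ws = wb.active

# Header row from the keys of the first object, then one row per object.
headers = list(rows[0])
ws.append(headers)
for row in rows:
    ws.append([row.get(h) for h in headers])

wb.save("data.xlsx")  # opens directly in Excel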

Something else that could be an option, depending on how versed you are with programming languages, is to use something like the ElasticSearch search engine or the CouchDB database, both of which support JSON input natively. You could then use them to query the content in various ways.
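As a rough illustration, CouchDB exposes a plain HTTP API, so a couple of requests are enough to load a document. This sketch assumes a local CouchDB reachable without authentication (newer versions require credentials) and a hypothetical database name:

import json

import requests  # pip install requests

base = "http://localhost:5984"  # default CouchDB address

# Create the database (returns an error body if it already exists).
requests.put(base + "/mydb")

with open("infile1.json") as f:
    doc = json.load(f)

# POST the document; CouchDB assigns it an _id and a revision.
resp = requests.post(base + "/mydb", json=doc)
print(resp.json())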


I've kind of done that before: turning JSON into an HTML table, which means you can also turn it into CSV. However, here are some things you need to know: 1) The JSON data must be well formed in a predefined structure, e.g.

 [
    ["col1", "col2", "col3"],
    ["data11", "data12", "data13"],
    ...
 ]

2) You have to parse the data row by row and column by column, and you have to take care of missing data or mismatched columns if possible. Of course, you also have to be aware of the data types (see the sketch after this list).

3) In my experience, if you have ridiculously large data, doing this in the client will kill the browser. You have to progressively fetch formatted HTML or CSV data from the server.
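A minimal sketch of points 1 and 2, converting the row structure above into an HTML table and padding missing cells (the file name is hypothetical):

import json
from html import escape

with open("data.json") as f:
    rows = json.load(f)  # [["col1", "col2", ...], ["data11", ...], ...]

header, *body = rows
width = len(header)

lines = ["<table>"]
lines.append("<tr>" + "".join("<th>%s</th>" % escape(str(h)) for h in header) + "</tr>")
for row in body:
    # Pad or truncate so every row matches the header width.
    padded = (list(row) + [""] * width)[:width]
    lines.append("<tr>" + "".join("<td>%s</td>" % escape(str(c)) for c in padded) + "</tr>")
lines.append("</table>")

print("\n".join(lines))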

As suggested by nightcracker above, try the Google tool. :)
