
Why do people want to deliver both Json and XML as output to their REST interfaces?

I understand why "REST framework" vendors want to provide support for returning both Json-based representations and XML-based representations, but why do people want to return both from the same service?

  • Is it because you will have client applications that are built on a platform that has no available Json parser?

  • Is it because you are hoping for wider adoption of the interface because you can appeal to more people?

  • Is it because you feel that it is a standard convention that all RESTful interfaces follow?

If you do deliver both:

Do you avoid namespaces in the XML so that it can be compatible with the Json format? Or do you have just one namespace for all of your data elements?

Do you have some kind of standardized mechanism for mapping attributes and elements into some kind of consistent Json format, or do you just avoid attributes in your XML?
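To make the attribute question concrete, here is a rough sketch of the kind of mismatch I mean, using a made-up Product class with JAXB annotations for the XML side and Jackson for the Json side (both libraries are assumed to be on the classpath):

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringWriter;

import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical resource class: 'id' is an XML attribute, 'name' an element.
@XmlRootElement
public class Product {
    @XmlAttribute
    public int id = 42;

    @XmlElement
    public String name = "Widget";

    public static void main(String[] args) throws Exception {
        Product p = new Product();

        // XML (roughly): <product id="42"><name>Widget</name></product>
        StringWriter xml = new StringWriter();
        JAXBContext.newInstance(Product.class).createMarshaller().marshal(p, xml);
        System.out.println(xml);

        // JSON: the attribute/element distinction disappears:
        // {"id":42,"name":"Widget"}
        System.out.println(new ObjectMapper().writeValueAsString(p));
    }
}
```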

Do you create different endpoints for each representation, or do you use content negotiation to deliver the requested format? Do you have a default format?

If you use caching on editable resources and use different URLs, how do you ensure that when one representation is invalidated, the other representations are invalidated as well?

Do you feel the benefit of supporting multiple formats is worth the effort required?

Summary of responses:

So the primary reason seems to be one of preference. Some developers prefer curly braces and some prefer angle brackets.

Some people want to migrate from XML to Json and therefore supporting both is required for backward compatibility.

Some want to use Json, but are concerned that some developers are scared of Json, so they support both so as not to offend anyone.

It is easy to turn the feature on in framework XYZ so why not!

Another interesting suggested reason is that JSON can be used to provide a quick and dirty data summary, while XML can be used as a semantically rich, complete representation.


A completely different reason than what's been said so far --

REST interfaces are about Resources, and each Resource has an identifier, which is a URL. Just because you want the Resource in a different serialization, be it XML, JSON, HTML, or something else, doesn't change the fact that we're still describing the same Resource.

So, instead of giving the XML and the JSON representations different paths, we use the 'Accept' header to determine what the client is interested in. In some cases, services use the 'Accept-Language' header to determine what language they should use for their metadata.
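As a rough sketch of what that looks like from the client side (the URL here is made up), the same URI yields different serializations purely through the Accept header:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AcceptHeaderDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical resource URL; one identifier for the Resource.
        URI resource = URI.create("https://api.example.com/orders/123");

        // Same URI, two representations, selected only by the Accept header.
        for (String mediaType : new String[] {"application/json", "application/xml"}) {
            HttpRequest request = HttpRequest.newBuilder(resource)
                    .header("Accept", mediaType)
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(mediaType + " -> " + response.statusCode());
        }
    }
}
```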

If we assign different identifiers to different serializations of the records, for the semantic web, we then have to embed extra information to link to all of the records that describe the 'same' object.

You can find more information about these efforts under the term Linked Data, although that typically refers to using RDF as the serialization.

Update: given the discussion of linking to specific formats, I'd also recommend people consider reading up on the Functional Requirements for Bibliographic Records (aka FRBR), which has a conceptual model for the relationships between 'Book' as an abstract 'Work' vs. the physical 'Item', and the levels in between. There has been a bit of discussion in the library, information and semantic web communities on FRBR, including how it relates to digital objects.

Basically, the issue is that you can assign identifiers at a number of levels (e.g. the Resource, the text of the metadata about the Resource, or the serialization of the text of the metadata about the Resource), and each has its own use.

You might also look at OAI-ORE for a specification for reporting relationships between objects, including alternate formats or languages.


Json is often suitable for client-side scripts. It is a super-lightweight response format, and most JavaScript frameworks come with a parser built in. On the other hand, many server-side applications and languages still rely heavily on XML. Just to name one: Java.

Of course, XML can be parsed with JavaScript, and Java (like most other programming languages) has at least one Json parser. But at the moment this seems to be the most common practice.
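For what it's worth, a minimal Java sketch of both directions might look like this (Jackson assumed for the Json side, the JDK's built-in DOM parser for the XML side; the payloads are invented):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.w3c.dom.Document;

public class ParseBoth {
    public static void main(String[] args) throws Exception {
        String json = "{\"id\":1,\"name\":\"Widget\"}";
        String xml  = "<product id=\"1\"><name>Widget</name></product>";

        // Json with Jackson (one of several Json parsers available to Java)
        JsonNode node = new ObjectMapper().readTree(json);
        System.out.println(node.get("name").asText());

        // XML with the DOM parser that ships with the JDK
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        System.out.println(doc.getDocumentElement()
                .getElementsByTagName("name").item(0).getTextContent());
    }
}
```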

Talking about the "implementation vs benefit" topic: I mostly work with Ruby, and I can tell you Ruby on Rails lets you create a Json or XML response from the same source in a matter of seconds. In that case it's not a problem, and I usually add the feature if I think it could be useful.

With other technologies, for example PHP, it would require more effort and the implementation would have a different cost. Unless it were a fundamental feature, I would probably stick with one response format until I found a real need to provide two different versions.


I have written a pretty verbose article myself on the History of REST, SOAP, POX and JSON Web Services. It basically goes into detail about the existence and benefits of the different options; unfortunately it's too long to list all the contents here.

Basically XML is more verbose, stricter and verifiable, which makes it a good candidate for interoperability but not that great a programmatic fit for most programming languages. It also supports the concept of a schema (i.e. metadata about the data), which can be found in XSD/DTD documents. A WSDL is an extension of an XSD and also supports describing web services in infinite detail (i.e. SOAP Web Services).

JSON is a more lightweight, loosely-typed text format that is effectively 'serialized JavaScript', which gives it the best programmatic fit for JavaScript since a JSON string can be natively eval()'ed into a JavaScript object. Its lack of namespaces and of the attribute/element distinction makes it a better fit for most other programming languages as well. Unfortunately it only has support for the basic types: Number, String, Boolean, Object and Array, which does not make it the best choice for interoperability.

I have some Northwind database benchmarks comparing the two and it looks like XML is on average 2x the size of JSON for the equivalent dataset. Although if your XML document contains many different namespaces the payload can blow out to much more than that.
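If you want a rough feel for that ratio on your own data, one unscientific way is to serialize the same object with Jackson's ObjectMapper and XmlMapper and compare the lengths; the Customer class below is just a stand-in, and jackson-dataformat-xml is assumed to be available:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;

public class PayloadSizeCheck {

    // Hypothetical record standing in for a Northwind-style row.
    public static class Customer {
        public String customerId = "ALFKI";
        public String companyName = "Alfreds Futterkiste";
        public String city = "Berlin";
    }

    public static void main(String[] args) throws Exception {
        Customer c = new Customer();

        String json = new ObjectMapper().writeValueAsString(c);
        String xml  = new XmlMapper().writeValueAsString(c);

        // The ratio will vary with your own schema, namespaces and field names.
        System.out.println("JSON length: " + json.length() + " -> " + json);
        System.out.println("XML length:  " + xml.length()  + " -> " + xml);
    }
}
```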


I have no direct insight into this, as I only produce REST interfaces for "internal" consumption.

I'm guessing providers of public APIs are merely "hedging their bets" in this ever-evolving, fast-paced and competitive environment.

Furthermore, for handling relatively simple object models (which most of these probably are), the extra effort to handle both formats is probably minimal and hence worthwhile.


I think the "why people do it" is pretty situational. If developing an application for a potential wide range of clients, supporting multiple content types might increase marketability - both to people who understand what different content types mean and to people who don't, but like things that support today's latest and greatest buzzwords.

Some reasons for supporting both are probably more technically justified. An application might require the ability for ajaxy browser clients to grab information for pages (for which JSON would be good), and also might need to support some standalone API clients that may do background or batch processing, for which XML libraries are more convenient.

I should hope that content negotiation would be preferred over different endpoints, since separate endpoints give the same resource multiple URIs, which can make for an ambiguous and sometimes confusing API.

In the end, I think the "worth the effort" value depends solely on whether you know you can get a return on the investment of supporting multiple content types. If nobody's going to use one of the two content types, why support both? Sure, it might be cool, but in a lot of cases it probably falls under YAGNI as well.


I wouldn't read too much into it. I think some developers prefer one over the other and (especially depending on your framework) it's pretty easy to provide both.

Most of the APIs I've seen that take this approach don't bother with XML namespaces.


Really, a lot of developers don't understand JSON. I know it's easy, lightweight, etc., but a lot of programmers don't want to spend the cycles to figure it out. They know XML, they are comfortable with it, and at the end of the day, that's really what they want to use. JSON also has the stigma of being associated with JavaScript, and that automatically makes it evil to a lot of people.

Supporting both really depends on the audience you are writing the API for. If it's a lot of business programmers who use older technologies, then yes, it's worth supporting both. If you're building it for the part of the tech industry that wants to be close to the edge, then it may not be worth supporting XML. Where I work, we have to support both, and it's worth it for us to do so. We know our clients and what they want, and they pay us to provide that for them.


In many cases the service started out with XML / SOAP, which was the only solution a few years ago. More recently (the last two years or so) JSON has become more and more popular, so most services decided to also support JSON, and since they already had an XML interface they just kept it.


Personally, I prefer to serve only JSON, as it avoids the angle-bracket tax on bandwidth. The fact that JSON is a very lean spec is appealing as well.

From experience, Java and C# developers like the ability to have XML reflected in their objects; this creates a static-typing euphoria effect where things can't go wrong, whereas JSON is more prone to dynamic behavior (i.e. mysticism or lispism).

PHP and Ruby programmers tend not to care.

AJAX developers prefer JSON as eval() is their parser (which is built-in and fast).


It depends on how your service is going to be consumed. I am currently working on a service which exposes both JSON and XML.

  1. Since some of my clients will be mobile apps, JSON suits them well - less processing power is needed to parse JSON compared to XML.
  2. Some of my clients will be web pages with JavaScript. Since JSON is a first-class citizen in JavaScript, and since we cannot really be sure of the computing power of the system the browser runs on, JSON makes perfect sense.
  3. Other clients are server-side components that can handle XML easily, and since the developers on that team are conversant with XML, they prefer it.

Hence with this mix of clients for my service, both JSON and XML make perfect sense.

We use the Accept header to determine the kind of response to return, and using Jersey with Jackson makes it really easy - no special coding is needed to handle each format separately. We do not use namespaces, and we do not use attributes.
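For anyone curious, a minimal sketch of what that looks like in JAX-RS terms (the OrderResource and Order types here are hypothetical; Jersey plus the Jackson JSON provider do the actual serialization):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical payload type; JAXB annotations drive the XML form,
// Jackson handles the JSON form from the same fields.
@XmlRootElement
class Order {
    public String id;
    public double total;
}

@Path("/orders")
public class OrderResource {

    // One method, one URI; the client's Accept header picks JSON or XML.
    @GET
    @Path("{id}")
    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
    public Order get(@PathParam("id") String id) {
        Order order = new Order();
        order.id = id;
        order.total = 9.99;
        return order;
    }
}
```

Which format wins when the client sends no Accept header comes down to the qs weights on @Produces (or, failing that, the implementation's ordering), so if you care about having a default it is worth setting explicitly.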
