
A 360-degree sphere panorama to cube panorama transformation algorithm (pseudocode or at least the full logic wanted)

So we can take an image like this from Wikipedia:

[image]

And try to map it onto a cube, or something cube-like:

[image]

[image]

And then distort it for the top and bottom, like:

[image]

One might think that doing the distortion for only half and then trying to fill in the rest would work:

[image]

It would not =( and content-aware filling would not help fill that square =(

Even then, it looks bad if you try to render such a cubic panorama.

Another way I can imagine is to render the panorama onto a sphere and then somehow take snapshots/projections of it onto the cube faces... but I do not know how to write that down with simple math operations (the idea is to avoid rendering engines and do it as mathematically as possible).
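The "mathematical" route in the last paragraph does not actually need a sphere or a rendering engine: for every pixel of every cube face, compute the 3D ray through that pixel, convert the ray to longitude/latitude, and sample the equirectangular source there. A minimal sketch in Python, assuming the source is a 2:1 equirectangular image indexable as `src[row][col]`; the face names, axis convention, and nearest-neighbour sampling are my own choices, not anything from the original post:

```python
import math

def face_to_dir(face, a, b):
    """Face-local coords (a, b) in [-1, 1] -> 3D direction (not normalized).

    Axis convention (x right, y forward, z up) is an assumption; flip signs
    to match a particular viewer.
    """
    return {
        "front":  ( a,   1.0,  b),
        "back":   (-a,  -1.0,  b),
        "right":  ( 1.0, -a,   b),
        "left":   (-1.0,  a,   b),
        "top":    ( a,   -b,   1.0),
        "bottom": ( a,    b,  -1.0),
    }[face]

def dir_to_equirect(x, y, z, src_w, src_h):
    """3D direction -> (column, row) in the equirectangular source image."""
    lon = math.atan2(x, y)                  # longitude in (-pi, pi]
    lat = math.atan2(z, math.hypot(x, y))   # latitude in [-pi/2, pi/2]
    u = (lon / math.pi + 1.0) * 0.5 * (src_w - 1)
    v = (0.5 - lat / math.pi) * (src_h - 1)
    return int(round(u)), int(round(v))

def render_face(face, face_size, src, src_w, src_h):
    """Nearest-neighbour resample of one cube face from the panorama.

    Real code would use bilinear interpolation instead of round().
    """
    out = [[None] * face_size for _ in range(face_size)]
    for j in range(face_size):
        for i in range(face_size):
            a = 2.0 * (i + 0.5) / face_size - 1.0
            b = 1.0 - 2.0 * (j + 0.5) / face_size   # +b points up
            x, y, z = face_to_dir(face, a, b)
            u, v = dir_to_equirect(x, y, z, src_w, src_h)
            out[j][i] = src[v][u]
    return out
```

Running the loop once per face gives all six cube faces; the top and bottom come out correctly distorted for free, because the same ray-to-longitude/latitude math handles the poles, so no "distort half and fill" step is needed.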


Jim,

I am Ken Chan, the primary architect of the Quadrilateralized Spherical Cube (QLSC). You can find many references on Google to the 1975 report "Feasibility Study of a Quadrilateralized Earth Data Base", which I co-authored with my colleague Mike O'Neill. I did all the formulation and mathematical analysis, and Mike did all the software design and coding. I still have the report somewhere. I believe the code is in an appendix in the back, but I cannot testify to that.

There was an earlier report "Organizational Structures for Constant Resolution Earth Data Bases" in 1973 which I co-authored with two other colleagues (Paul Beaudet and Leon Goldshlak) at Computer Sciences Corporation (CSC). Leon was the project manager. Paul proposed one structure and I proposed four. The QLSC was one of my four conceptualizations and was subsequently chosen by the Navy for adoption. No code was developed for any of these models.

I have been away from that area of work for more than 35 years, but I was aware that NASA Goddard in Greenbelt, Maryland eventually used QLSC for its COBE mission. I also became aware that the QLSC (or some derivative of it) was used by astronomers and astrophysicists in the US and Europe for star-mapping because of its equal-area properties as well as its hierarchical indexing scheme.

Lately, I have also become aware that the basic organizational structure has been used in Hyperspectral Data Management and Compression.

I just turned 70 years old a few days ago, and nothing makes me feel more satisfied than knowing I am leaving behind something that other people can use. The thought of patenting it never crossed my mind when I developed the approach. Also, the thought of naming it the "Chan Spherical Cube" (to be abbreviated CSC) was rejected by Computer Sciences Corporation and by me.

I hope this gives you some idea of the history of the QLSC.

Ken


There's a map projection called the Quadrilateralized Spherical Cube that's used in astrophysics to represent all-sky maps. It has a nice property that the pixels are within a few percent of having equal areas all over the sky, so that geometric distortions are reduced.

Basically, the celestial globe is projected onto a cube, and each cube face is divided into pixels; but rather than being a rectilinear grid, the row and column boundaries are slightly curved so that each pixel maps to a roughly equally sized area on the sphere.

The pixel addressing is kind of interesting. Suppose you have a pixel with coordinates X,Y on one of the cube faces. If X has binary representation abcd, and Y is ABCD, then the pixel address on that face has X and Y interleaved: aAbBcCdD. So to rebin the image to larger pixels, all you need to do is shift right 2 bits to get the pixel address at the lower resolution.

With 32-bit pixel addresses, you can use 3 bits to represent the cube face, and 28 bits to represent the interleaved X and Y coordinates within that face. At this resolution, each pixel covers an area of about 20x20 arcsec, or about a third of a mile square(ish) -- so one could make good use of this as a sort of geographic or celestial coordinate hashing technique.
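The interleaved addressing described above is what is now usually called a Morton (Z-order) code. A small sketch of the scheme, assuming 14 bits per axis to fill the 28-bit within-face field; the function names are mine, not from any QLSC code:

```python
def interleave(x, y):
    """x = abcd, y = ABCD  ->  aAbBcCdD (x supplies the upper bit of each pair)."""
    addr = 0
    for bit in range(14):          # 14 bits per axis -> 28-bit face-local address
        addr |= ((x >> bit) & 1) << (2 * bit + 1)
        addr |= ((y >> bit) & 1) << (2 * bit)
    return addr

def deinterleave(addr):
    """Inverse of interleave: recover (x, y) from an interleaved address."""
    x = y = 0
    for bit in range(14):
        x |= ((addr >> (2 * bit + 1)) & 1) << bit
        y |= ((addr >> (2 * bit)) & 1) << bit
    return x, y

def pixel_id(face, x, y):
    """3 face bits on top of the 28 interleaved coordinate bits (31 bits total)."""
    return (face << 28) | interleave(x, y)

def rebin(addr):
    """Drop to the next-coarser resolution: one 2-bit right shift."""
    return addr >> 2
```

Rebinning works because shifting the interleaved address right by 2 drops the low bit of both X and Y at once, so `rebin(interleave(x, y)) == interleave(x >> 1, y >> 1)`.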

To use this, you'd have to implement forward transformations (long, lat) or (RA, dec) to pixel numbers, and inverse transformations going from pixel numbers to (long, lat) or (RA, dec). And of course there are tons of well-known map projections from image coordinates to (long,lat) and back.

I didn't find any code for this in a few minutes of Googling -- maybe I can dig up some code I wrote about 20 years ago when I worked on the EUVE astrophysics mission, which used this projection for their all-sky survey maps.
