What might be the best method of storing Mandelbrot values in a database?
I'm currently experimenting with rendering a Mandelbrot set, and I've quickly come to realize that it would be useful not to have to recalculate the iteration data for every render. On the other hand, it's a lot of data to keep track of. It seems to me (based on my limited experience with RDBMSes) that a relational database is probably not the way to go, because I don't want performance to degrade as the data set grows. It almost seems like the perfect situation for a hash table, but I've never used one before and can't work out how to use or manage one in any of the existing web server languages (Python/PHP/whatever).
To be a little more explicit: the important values to be stored are:
- The original real part of a number on the complex plane
- The original imaginary part of a number on the complex plane
- The number of max iterations
- The number of completed iterations n before max iterations is hit or until the point runs off to infinity
- The final real part of a number on the complex plane after n iterations
- The final imaginary part of a number on the complex plane after n iterations
At any given time, given the original real part, the original imaginary part, and the maximum number of iterations, I'd like to be able to retrieve a result set with the final real and imaginary parts.
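To make the lookup concrete, here is a rough sketch of what I mean, in Python. The function and variable names are made up; the point is just that the key is (re, im, max_iter) and the value is (n, final_re, final_im):

```python
# Plain-dict sketch of the cache idea. All names here are illustrative.

def mandelbrot(c_re, c_im, max_iter):
    """Iterate z -> z^2 + c; return (iterations_done, final_re, final_im)."""
    z_re, z_im = 0.0, 0.0
    for n in range(max_iter):
        if z_re * z_re + z_im * z_im > 4.0:  # escaped the radius-2 circle
            return n, z_re, z_im
        z_re, z_im = z_re * z_re - z_im * z_im + c_re, 2 * z_re * z_im + c_im
    return max_iter, z_re, z_im

cache = {}

def mandelbrot_cached(c_re, c_im, max_iter):
    # Hash-table lookup keyed by the original point and the iteration cap.
    key = (c_re, c_im, max_iter)
    if key not in cache:
        cache[key] = mandelbrot(c_re, c_im, max_iter)
    return cache[key]
```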
So what do you think? Is a hash table the way to go? Is the problem too complex for mere mortal data structures?
Any help at all would be greatly appreciated. Thanks in advance!
Edit
I'll expound on the problem a bit at the kind request of julienaubert.
My goal is to let a user zoom in on a Mandelbrot set without the calculation delay (even if only through pre-defined zoom levels). I also want to do this in a browser that constantly asks the server for a new data array given the new x and y coordinates and the width and height to be viewed on the complex plane. However, since calculating each pixel's color can be done much more quickly (given max_iter, real_final, and imag_final), and since it would be nice to let the user adjust color settings, I'm only going to send the browser the variables enumerated above and let the user's browser calculate the colors.
Take a look at this:
http://jsfiddle.net/xfF3f/
If you take a look at the drawMandelbrot() function, you can see that the point loops store the important values in a variable called dataset. This variable is then used in the drawMandelbrotFromData() function, which performs the remaining calculations needed to figure out the color of each pixel.
If you click "cleardabrot", the canvas is replaced with a white rectangle. If you click "refilldabrot", the drawMandelbrotFromData() function runs again. This is done to show how quickly the set can actually be rendered when it doesn't have to perform the expensive iterative calculations.
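For reference, here is a sketch of one common way to turn the stored triplet (n, final_re, final_im) into a pixel color: the smooth (continuous) escape-time formula. This is not necessarily what the fiddle does, and the function names and palette are my own:

```python
import math

def smooth_value(n, max_iter, z_re, z_im):
    """Continuous escape-time value from stored results; None = inside the set."""
    if n >= max_iter:
        return None  # never escaped; conventionally rendered black
    mag = math.sqrt(z_re * z_re + z_im * z_im)
    # Smooth iteration count: n + 1 - log(log|z|) / log 2
    return n + 1 - math.log(math.log(max(mag, 1.0 + 1e-12))) / math.log(2)

def to_rgb(v, max_iter):
    """Map a smooth value to an (R, G, B) tuple; the palette is arbitrary."""
    if v is None:
        return (0, 0, 0)
    t = v / max_iter
    return (int(255 * t), int(255 * t * t), int(255 * math.sqrt(t)))
```

Note that no iteration happens here, which is why recoloring from cached data is so fast.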
So the ultimate goal here is to calculate these values to arbitrary precision, so that a user can zoom to any level of the set, have the server figure out whether there is any data for those exact points (or, preferably, for points NEAR those exact points, though I'm not sure how that could be done without a range query of some sort), and then send the information back on a pixel-by-pixel basis. For example...
- A user is using a 300x300 canvas.
- He zooms to a point where the top left corner is x = .000001 and y = .0000231.
- His chosen width and height in this frame are w = .00045 and h = .00045.

He would send those numbers to the server and receive, in turn, an array with 300*300 indexes (one representing each point), each containing the information needed to determine the color of that pixel on his canvas. My question here is: what is the best way to store pre-calculated Mandelbrot data such that a user can input arbitrary x, y, w, and h values and quickly pull back the values of the points on the complex plane in that range?
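To illustrate the kind of range query I mean, here is a naive relational sketch using Python's built-in sqlite3. Table and column names are illustrative; a composite index lets the database narrow the scan by the real coordinate:

```python
import sqlite3

# Illustrative schema: one row per pre-computed point.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE points (
        re REAL, im REAL, max_iter INTEGER,
        n INTEGER, final_re REAL, final_im REAL
    )
""")
conn.execute("CREATE INDEX idx_re_im ON points (re, im)")

def query_window(x, y, w, h, max_iter):
    """Return all stored points inside the requested viewport rectangle."""
    return conn.execute(
        "SELECT re, im, n, final_re, final_im FROM points "
        "WHERE re BETWEEN ? AND ? AND im BETWEEN ? AND ? AND max_iter = ?",
        (x, x + w, y, y + h, max_iter),
    ).fetchall()
```

The catch, of course, is that at arbitrary zoom the exact coordinates requested will almost never match rows that were stored earlier, which is the heart of my question.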
> At any given time, given the original real part, the original imaginary part, and the max number of iterations, I'd like to be able to get a result set with the final real and imaginary parts.
It is not clear from your question why you need this. Why do you need to re-compute the same point?
In case you are experimenting with different max_iterations settings, you can just save the actual iterations taken at the per-pixel level in a binary file, text file, image, or whatever you find convenient to load and store, including a relational database.
If you are doing real-time rendering and you are using some processing which requires re-calculating the recurrence equation (at the same original point and with the same max iterations), then I would imagine you could speed this up by having a look-up table.
Obviously, your look-up table must be faster than doing the computation. You need a look-up table for which the operations below take less time in total than doing the computation again:
- calculating the index (given origo_real, origo_imag, max_iter)
- loading the cached computation (final_real, final_imag, actual_iter)
- one initial store
Depending on how you will re-compute / re-access at the same points, you could divide your problem in such a way that it is highly likely that the index is in the look-up table and the look-up table is small enough to be stored in a L1 or L2 cache.
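A minimal sketch of such a bounded look-up table, using Python's functools.lru_cache as a stand-in. The maxsize is illustrative of "small enough to stay in a fast cache"; in a real L1/L2-sized table you would size it from the entry size and the cache budget:

```python
from functools import lru_cache

@lru_cache(maxsize=4096)  # illustrative bound on the table size
def mandelbrot_point(c_re, c_im, max_iter):
    """Iterate z -> z^2 + c; returns (iterations_done, final_re, final_im)."""
    z_re, z_im = 0.0, 0.0
    for n in range(max_iter):
        if z_re * z_re + z_im * z_im > 4.0:
            return n, z_re, z_im
        z_re, z_im = z_re * z_re - z_im * z_im + c_re, 2 * z_re * z_im + c_im
    return max_iter, z_re, z_im
```

Repeated calls with the same arguments hit the table instead of recomputing, and mandelbrot_point.cache_info() reports the hit/miss counts, which tells you directly whether your access pattern makes the table worthwhile.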
Those are some ideas, but you should clarify what your real problem is.
In case you just need a lot of this data for further analysis and real-time is not a requirement, then, well... clarify what your real issue is :)
Answer for the update
It seems similar to using a map service (zoom in/out, move around): you are essentially delivering an image for a given area and zoom level.
However, in this case, since any zoom level may be queried, whatever you cache for one user may not be re-usable for the next. I am not sure why it makes sense to do it this way rather than writing client software in which the user can zoom in real time (which has been done).
In any case: if your main problem is bandwidth but you have plenty of computation power, you could store images of calculated patches in a highly compressed format, at somewhat lower quality, and cache those images. You may then need to stitch several patches together to deliver the exact area the user wanted; the trick would be to query the minimal set of patches given the zoom and area.
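The usual way map services make that query tractable is to only serve fixed zoom levels and a fixed grid of tiles per level, so any viewport maps to a small, predictable set of tile keys. A sketch, assuming an illustrative root square of [-2, 2] x [-2, 2] for the Mandelbrot plane:

```python
# At zoom level z the root square is cut into a 2^z x 2^z grid of tiles.
# The root bounds and key format are illustrative assumptions.
ROOT_MIN, ROOT_SIZE = -2.0, 4.0

def tiles_for_viewport(x, y, w, h, zoom):
    """Return the (zoom, col, row) tile keys covering the requested window."""
    n = 2 ** zoom
    tile = ROOT_SIZE / n
    c0 = max(0, int((x - ROOT_MIN) // tile))
    c1 = min(n - 1, int((x + w - ROOT_MIN) // tile))
    r0 = max(0, int((y - ROOT_MIN) // tile))
    r1 = min(n - 1, int((y + h - ROOT_MIN) // tile))
    return [(zoom, c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)]
```

Because every user's request snaps to the same grid, tiles cached for one user are reusable for the next, which is exactly what arbitrary-viewport caching lacks.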
I fear that most queries would ask for patches that do not exist (since any zoom level is possible). Perhaps some information on how, e.g., Google Maps or GIS systems work could give you some ideas. If your main problem is CPU, then maybe you could do this differently and let the user do the computation in an applet (and possibly send back the result).
If you are doing this to learn how to cache/compute over client-server, you might want to consider a different challenge, as this one can be solved entirely on the client side by any decent computer.