I have a large project in SVN with the classic structure:

    myproject/
        branches/
            developer1-mybranch1/
            developer2-mybranch3/
So I am working on a really large code base: more than 3,000 files, more than 1 million lines of code, and more than 500 tables.
Hi, I am using jQuery and retrieving "items" from one of my MySQL tables. I have around 20,000 "items" in that table, and it is going to be used as a search parameter in my form. So basically they c
I am planning to use MySQL to store my datasets. I have about 10^8 (a hundred million) records: ID(int), x(float), y(float), z(float), property(float).
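For reference, a rough sketch of what such a table could look like, created from Python with mysql-connector-python; the table name "points", the column types, and the connection settings are placeholders, not anything specified above.

    # Sketch: create a table for ~10^8 rows of (ID, x, y, z, property).
    # Table name, column types, and connection parameters are placeholders.
    import mysql.connector

    ddl = """
    CREATE TABLE IF NOT EXISTS points (
        id       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        x        FLOAT NOT NULL,
        y        FLOAT NOT NULL,
        z        FLOAT NOT NULL,
        property FLOAT NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=InnoDB;
    """

    conn = mysql.connector.connect(
        host="localhost", user="user", password="password", database="mydb"
    )
    cur = conn.cursor()
    cur.execute(ddl)
    conn.commit()
    cur.close()
    conn.close()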
I have an InnoDB table with about 17 normalized columns and ~6 million records. The size of the table is ~15 GB. Queries against the table are starting to take too long and sometimes time out or crash.
I am developing a map rendering application for Android. The map data is quite big, about 1.1 GB. Since there are limits on .apk size both in the market and on the phone, the recommendation is to down
I have a MySQL database with 21M records, and I'm trying to do an update on about 1M records, but the query fails with ERROR 1206 (HY000): The total number of locks exceeds the lock table size.
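One common way around ERROR 1206, besides raising innodb_buffer_pool_size, is to run the update in smaller batches so each transaction holds fewer row locks at once. A hedged sketch follows; the table name, columns, and WHERE condition are purely illustrative, not taken from the question.

    # Sketch: apply a large UPDATE in batches so each transaction holds
    # fewer row locks. Table, columns, and condition are illustrative only.
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="user", password="password", database="mydb"
    )
    cur = conn.cursor()

    batch = 10000
    while True:
        # LIMIT keeps the lock footprint of each statement small;
        # "items" and "status" are hypothetical names.
        cur.execute(
            "UPDATE items SET status = 1 WHERE status = 0 LIMIT %s", (batch,)
        )
        conn.commit()            # release the locks taken by this batch
        if cur.rowcount == 0:    # nothing left to update
            break

    cur.close()
    conn.close()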
Each item is an array of 17 32-bit integers. I can probably produce 120-bit unique hashes for them. I have an algorithm that produces 9,731,643,264 of these items, and want to see how many of these a
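As an illustration of the hashing step only: one way (an assumption, not necessarily what was intended) to get exactly 120 bits per item is BLAKE2b with a 15-byte digest over the packed integers.

    # Sketch: derive a 120-bit (15-byte) hash from an item of 17 unsigned
    # 32-bit integers. BLAKE2b with digest_size=15 is an assumed choice.
    import hashlib
    import struct

    def item_hash(item):
        """item: a sequence of 17 ints, each fitting in 32 bits."""
        packed = struct.pack("<17I", *item)   # 68 bytes, little-endian
        return hashlib.blake2b(packed, digest_size=15).digest()

    print(item_hash(list(range(17))).hex())

For scale, 9,731,643,264 digests of 15 bytes each already come to roughly 146 GB of raw hash data.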
I'm using WCF for sending all sorts of messages, and this message in particular is about 3,200,000 bytes plus some strings and headers. The large payload is a serialized object retrieved from the host thr
It is my first time creating a program that involves file reading and writing. Actually, I'm wondering what the best technique for doing this is, because when I compared my work with my classmate's, our
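If a baseline helps, here is a minimal sketch of line-by-line reading and writing; Python and the file names are assumptions, since the question does not say which language is being used.

    # Minimal sketch: read one file and write a processed copy of it.
    # File names are placeholders; the language choice is an assumption.

    def copy_upper(src_path, dst_path):
        # Reading line by line keeps memory use small even for big files;
        # the "with" blocks close both files automatically.
        with open(src_path, "r", encoding="utf-8") as src, \
             open(dst_path, "w", encoding="utf-8") as dst:
            for line in src:
                dst.write(line.upper())

    if __name__ == "__main__":
        copy_upper("input.txt", "output.txt")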