
Insert data from another DB into tables

I'm having an issue here. Let me explain.

So I was about done with the migration of this project and decided to run the test suite to make sure the logic was still working as expected. Unfortunately, it wasn't... but that's not the issue.

At the end of the suite, there was a nice script that executes a delete on the data in 5 tables of our development database. That would be fine if there were also a script to actually repopulate them...

The good side is that we still have plenty of data in the production environment, so I'm looking for a way and/or possibly a tool to extract the data from these 5 particular tables in production and insert it into the dev environment. There are all sorts of primary and foreign keys between these tables, maybe auto-increment fields, (and also A LOT of data) which is why I don't want to do it manually.

Our database is DB2 v9, if it makes any difference. I'm also working with SQuirreL; there might be a plugin, but I haven't found one yet.

Thanks


This is sort of a shot in the dark, as I've never used DB2, but from previous experience, my intuition immediately says "Try CSV". I'm willing to bet my grandmother you can import / export CSV files in your software (why did I just start thinking of George from Seinfeld?). This should also leave you with FKs and IDs intact. You might have to reset your auto-increment value to whatever is appropriate, if need be. That, of course, would be done after the import.

In addition, CSV files are plain text and very easily manipulated should any quirks rear their head.
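
If those auto-increment fields are DB2 identity columns, the post-import reset could look like the sketch below; the table name, column name, and restart value are all hypothetical (in practice you'd use MAX(id) + 1 from the freshly imported data):

-- Hypothetical: restart the identity sequence after the import so that
-- new inserts don't collide with the keys you just brought over.
ALTER TABLE my_table
  ALTER COLUMN id
  RESTART WITH 1001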

Best of luck to you!


Building on Arve's answer, DB2 has a built-in command for importing CSV files:

IMPORT FROM 'my_csv_file.csv'
OF del
INSERT INTO my_table

You can specify a list of columns if they are not in the default order:

IMPORT FROM 'my_csv_file.csv'
OF del
-- 1st, 2nd, 3rd column in CSV
METHOD P(1, 2, 3)
INSERT INTO my_table
(foo_col, bar_col, baz_col)

And you can also specify a different delimiter if it's not comma-delimited. For example, the following specifies a file delimited by |:

IMPORT FROM 'my_csv_file.csv'
OF del
MODIFIED BY COLDEL|
-- 1st, 2nd, 3rd column in CSV
METHOD P(1, 2, 3)
INSERT INTO my_table
(foo_col, bar_col, baz_col)
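
Getting the data out of production in the first place works the same way: DB2's companion EXPORT command writes a query result to a delimited file. A minimal sketch, with placeholder file, column, and table names:

EXPORT TO 'my_csv_file.csv'
OF del
SELECT foo_col, bar_col, baz_col
FROM my_table

Since the tables have foreign keys between them, importing parent tables before their children keeps the constraints happy.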

There are a lot more options. The official documentation is a bit hairy:

  • DB2 Info Center | IMPORT command


Do you have access to the emulator? There's a function in it that allows you to import CSV into tables directly.


Frank.

Personally, I am not aware of any automated tools that can "capture" a smaller subset of your production data into a test suite, but in my day, I was able to use QMF and some generic queries to do just that. It does require forward planning / analysis of your table structures, parent-child dependencies, referential integrity and other things.

It did take some initial work, but once it was done, I was able to use and re-use these tools to extract several different views of production data for my testing purposes. If this appeals to you, read on.

On a high-level view, you could do this:

  1. Determine what the key column names are.
  2. Create a "keys" table for them.
  3. Write several queries to look for your test conditions and populate the keys_table (a SQL sketch of steps 2 and 3 follows this list).
  4. Once keys_table holds a satisfactory subset of keys, you can use your created tools to strip out the data for you.
  5. Write a generic query that joins the keys_table with that of your production tables and export the data into flat files.
  6. Write a proc to do all the extractions / populations for you automatically.
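
To make steps 2 and 3 concrete, here is a minimal sketch in plain SQL; the table, its columns, and the selection condition are hypothetical (the column names deliberately echo the proc example further down):

-- Hypothetical keys table identifying which production rows to extract.
CREATE TABLE my_test_keys (
    stud_id   INTEGER NOT NULL,
    course_id INTEGER NOT NULL
);

-- One of several population queries, each encoding a test condition.
INSERT INTO my_test_keys (stud_id, course_id)
SELECT stud_id, course_id
  FROM students_table
 WHERE enroll_dt >= '2009-01-01';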

If you have access to QMF (and you probably do in a DB2 shop), you may be able to do something like this:

  1. Determine all of the tables that you need.
  2. Determine the primary indexes for those tables.
  3. Determine any referential integrity requirements for those tables.
  4. Determine Parent-Child relationships between all the tables (the catalog query after this list is one way to find them).
  5. For the lowest-level child table (typically the one with the most indexes), note all the columns used to identify a unique key.
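
As an aside for step 4: on DB2 for Linux/UNIX/Windows, the parent-child relationships can be read straight from the catalog, as in the sketch below (the schema name is a placeholder; on DB2 for z/OS the equivalent information lives in SYSIBM.SYSRELS):

-- List each child table with its parent and the FK constraint name.
SELECT tabname    AS child_table,
       reftabname AS parent_table,
       constname  AS fk_constraint
  FROM syscat.references
 WHERE tabschema = 'MYSCHEMA'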

With the above information, you can build a generic query to strip out a smaller subsection of production data for #5. In other words, you write a series of specific queries that populate the small key table you created.

In QMF, you can create a generic query like this:

select t.*
  from &t_tbl   t
     , &k_tbl   k
 where &cond
 order by 1, 2, 3

In the proc, you simply pass the table name, keys, and conditions as variables. Once the data is captured, you EXPORT it to some file name.

You can create an EXPORT_TABLE proc that would look something like this:

run query1 (&&t_tbl = students_table , &&k_tbl = my_test_keys ,
+ &&cond = (t.stud_id = k.stud_id and t.course_id = k.course_id)
export data to studenttable

run query1 (&&t_tbl = course_table   , &&k_tbl = my_test_keys ,
+ &&cond = (t.cour_id = k.cour_id
+ and t.cour_dt between '2009-01-01' and '2010-02-02')
export data to coursetable

.....

This could capture all the data as needed.

You can then create an IMPORT_TEST proc to do the opposite:

import data from studenttable
save data as student_table (replace = yes

import data from coursetable
save data as course_table (replace = yes

....

It may take a while to create, but at least you would then have a re-usable tool to extract your data.

Hope that helps.

