I have 500 CSV files in this format (e.g. IndicatorA_Name.csv):

             1900  1901  1902  ...
    Norway      3     2   ...
    Sweden      1     3     3  ...
    Denmark     5     2     3  ...
    ...
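To make the layout concrete, here is a minimal sketch (not the asker's code) of reading one such wide file with pandas and reshaping it into (country, year, value) rows, which is usually the easier shape to load into a database; the file name is just the example above and the column handling is an assumption.

```python
import pandas as pd

# A minimal sketch, assuming each file is a wide table with countries in the
# first column and one column per year; the file name is just the example above.
df = pd.read_csv("IndicatorA_Name.csv")
df = df.rename(columns={df.columns[0]: "country"})

# Reshape to long format: one (country, year, value) row per cell.
long_df = df.melt(id_vars="country", var_name="year", value_name="value")
print(long_df.head())
```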
I am new to designing an ETL process. Currently I have two databases: one is the live database that the application uses for everyday transactions, and the other is the data warehouse.
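As a starting point, a hedged sketch of a nightly move between the two databases might look like the following; it assumes both are reachable via ODBC DSNs, a SQL Server style GETDATE(), and entirely hypothetical table and column names.

```python
import pyodbc

# A minimal sketch, assuming ODBC DSNs for both databases and hypothetical
# table/column names; the date filter assumes SQL Server style SQL.
live = pyodbc.connect("DSN=live_db")
dw = pyodbc.connect("DSN=warehouse_db")

rows = live.cursor().execute(
    "SELECT id, amount, transaction_date FROM transactions "
    "WHERE transaction_date = CAST(GETDATE() AS DATE)"
).fetchall()

cur = dw.cursor()
cur.fast_executemany = True  # speeds up bulk inserts on some ODBC drivers
cur.executemany(
    "INSERT INTO fact_transactions (id, amount, transaction_date) VALUES (?, ?, ?)",
    [tuple(r) for r in rows],
)
dw.commit()
```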
I need to transfer large amounts of data (several million records) every day from a DB2 database to an Oracle database. Could you suggest the best-performing method to do that? DB2 will allow yo
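For what it's worth, a batched copy through ODBC is one baseline approach; the sketch below assumes hypothetical DSNs, tables, and columns, and for several million rows a day a flat-file export loaded with Oracle SQL*Loader will usually outperform row-by-row inserts.

```python
import pyodbc

# A minimal sketch of a batched copy, assuming ODBC DSNs for both databases;
# DSNs, table and column names are hypothetical.
src = pyodbc.connect("DSN=db2_source")
dst = pyodbc.connect("DSN=oracle_target")

read = src.cursor()
read.execute("SELECT col1, col2, col3 FROM source_table")

write = dst.cursor()
while True:
    batch = read.fetchmany(10000)          # pull rows in chunks to bound memory
    if not batch:
        break
    write.executemany(
        "INSERT INTO target_table (col1, col2, col3) VALUES (?, ?, ?)",
        [tuple(r) for r in batch],
    )
    dst.commit()                           # commit per batch to keep undo small
```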
I have a C# application that performs an ETL process. For a self-referencing table, the application will run "ALTER TABLE [tableName] NOCHECK CONSTRAINT [constraintName]", which turns off any FK const
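For reference, the disable/load/re-enable pattern the question describes looks roughly like this, shown here from Python rather than the original C# application and with hypothetical table and constraint names; note that WITH CHECK CHECK CONSTRAINT re-validates the existing rows when the constraint is turned back on.

```python
import pyodbc

# A minimal sketch of the disable/load/re-enable pattern, assuming SQL Server
# and hypothetical table/constraint names; the real application does this from C#.
conn = pyodbc.connect("DSN=target_db", autocommit=True)
cur = conn.cursor()

cur.execute("ALTER TABLE [Employee] NOCHECK CONSTRAINT [FK_Employee_Manager]")
try:
    # ... bulk load rows into [Employee] here ...
    pass
finally:
    # WITH CHECK re-validates existing rows so the constraint is trusted again.
    cur.execute("ALTER TABLE [Employee] WITH CHECK CHECK CONSTRAINT [FK_Employee_Manager]")
```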
I have several billion rows of data in CSV files. Each row can have anything from 10 to 20 columns. I want to use COPY FROM to load the data into a table containing 20 columns. If a specific CSV row o
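One way to handle the varying column counts is to pad every row out to 20 fields before COPY sees it; the sketch below assumes PostgreSQL with psycopg2 and hypothetical file and table names.

```python
import csv
import psycopg2

NUM_COLS = 20

# Pad every row out to 20 columns so COPY sees a uniform shape.
with open("input.csv", newline="") as src, open("padded.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        writer.writerow(row + [""] * (NUM_COLS - len(row)))

# A minimal sketch, assuming PostgreSQL + psycopg2; file and table names are hypothetical.
conn = psycopg2.connect("dbname=etl")
with conn, conn.cursor() as cur, open("padded.csv", newline="") as f:
    cur.copy_expert("COPY target_table FROM STDIN WITH (FORMAT csv, NULL '')", f)
```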
Pan/Kitchen of the spatially enabled version of Pentaho Data Integration (Kettle) version 3.2 hangs at 'New Database Defined'. I am trying to extract some data from an MS SQL Server 2005 table through
I have a SQL proc that returns 20,000+ records and want to get this data into a CSV for a SQL Server 2005 bulk load operation.
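A simple way to get the proc output into a CSV is to run it through ODBC and write the rows with the csv module; the sketch below assumes hypothetical DSN, proc, and file names, and the resulting file can then be fed to bcp or BULK INSERT.

```python
import csv
import pyodbc

# A minimal sketch, assuming SQL Server via ODBC; DSN, proc and file names are hypothetical.
conn = pyodbc.connect("DSN=sql2005")
cur = conn.cursor()
cur.execute("EXEC dbo.usp_GetRecords")

with open("records.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row
    writer.writerows(cur.fetchall())
```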
I am trying to pull together a complex Kettle transformation on a set of patient data. I have several Table Input steps that query a MySQL table and assemble patient rows into a stream. Each table inp
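In plain code, several Table Input steps feeding one hop amount to chaining a few query results into a single row stream; the sketch below is only an analogy, using pymysql with entirely hypothetical queries and table names.

```python
import itertools
import pymysql

# A minimal sketch of what several Table Input steps feeding one stream amount to;
# connection details, queries and table names are hypothetical.
conn = pymysql.connect(host="localhost", user="etl", password="secret", database="clinic")

queries = [
    "SELECT patient_id, visit_date, 'inpatient'  AS source FROM inpatient_visits",
    "SELECT patient_id, visit_date, 'outpatient' AS source FROM outpatient_visits",
]

def run(sql):
    with conn.cursor() as cur:
        cur.execute(sql)
        yield from cur.fetchall()

# Chain the result sets into a single stream of patient rows.
patient_stream = itertools.chain.from_iterable(run(q) for q in queries)
for row in patient_stream:
    print(row)
```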
First, the generalized problem: I need to make an ETL solution that users will be able to launch from a cloud application. I would like to make a solution where they do some simple mapping and my progr
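The core of such a solution is a user-supplied column mapping that the program applies to every record; the sketch below uses hypothetical column names and record shapes.

```python
# A minimal sketch of a user-supplied mapping driving the transform;
# the mapping and record shapes are hypothetical.
user_mapping = {
    "cust_name": "customer_name",   # source column -> target column
    "cust_mail": "email",
    "signup":    "signup_date",
}

def apply_mapping(record: dict, mapping: dict) -> dict:
    """Rename, and keep only, the columns the user mapped."""
    return {target: record.get(source) for source, target in mapping.items()}

source_row = {"cust_name": "Ada", "cust_mail": "ada@example.com",
              "signup": "2011-05-01", "internal_id": 7}
print(apply_mapping(source_row, user_mapping))
# -> {'customer_name': 'Ada', 'email': 'ada@example.com', 'signup_date': '2011-05-01'}
```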
I have a very annoying task: I have to load >100 CSV files from a folder into a SQL Server database. The files have column names in the first row. The data type can be varchar for all columns. The table names in
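One way to script this is to loop over the folder, derive a table name from each file name, create an all-varchar table from the header row, and insert the remaining rows; the sketch below assumes SQL Server via pyodbc with a hypothetical DSN and folder path, and that the table name comes from the file name.

```python
import csv
import glob
import os
import pyodbc

# A minimal sketch, assuming SQL Server via ODBC; DSN and folder are hypothetical,
# every column is created as varchar, and the table name is taken from the file name.
conn = pyodbc.connect("DSN=sqlserver_target", autocommit=True)
cur = conn.cursor()
cur.fast_executemany = True

for path in glob.glob(r"C:\csv_drop\*.csv"):
    table = os.path.splitext(os.path.basename(path))[0]   # table name from file name
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)

        cols = ", ".join(f"[{c}] varchar(255)" for c in header)
        cur.execute(f"CREATE TABLE [{table}] ({cols})")

        placeholders = ", ".join("?" for _ in header)
        cur.executemany(f"INSERT INTO [{table}] VALUES ({placeholders})", list(reader))
```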