Tuesday, February 21, 2012

Reply to Importing Error

Hi all,

I am facing a problem importing a CSV data file into a table.


For example, a CSV file with 2 columns and 4 records:

EmpNo EmpName

1 a
2 b
3 c
1 a

I created a new table called T4 with this structure:

EmpNo   VARCHAR(50), primary key
EmpName VARCHAR(50), Allow Nulls disabled

When I import the CSV data into T4, the import cannot finish and throws an error. When I remove the primary key and re-enable Allow Nulls on both columns of T4, the import succeeds.
My question: with the structure above (primary key in place), instead of failing outright, can the import load the first 3 rows and then raise an error only for the row that violates the primary key? Is this possible? If so, please suggest how to solve the problem. Can I use BULK INSERT or SQL scripts for this?

Thanks in advance
Karna

I believe that I understand your current situation, but I do not think I understand what it is you are trying to do.

If you're trying to insert the three "good rows" and "fail" the one "bad" row without failing the package as a whole, you may be able to accomplish this by adding an error output to the destination to which you're writing these records. Then the "bad" records will be redirected to that portion of the data flow, where you can count them, log them, or ignore them if you want.

Is this what you're trying to accomplish? If so, please let us know if this technique gives you the solution you need. If not, please see if you can rephrase your goal, and I can try again.

|||Hi Matthew,

Thanks for your reply. I am running the wizard in SQL Server 2005. The actual problem: I am setting up a job that imports a CSV data file into a table. The table already has a primary key and Allow Nulls disabled. The CSV file contains duplicate records and empty rows, so the import job does not complete; it throws an error. My question: if the duplicate data is in the 1000th record, can the job succeed for records 1 through 999, and then report and log the error for record 1000?
Please suggest how to make the job succeed.

Thanks in advance
Karna|||

I'm honestly not certain how I would go about doing that. For most situations where I'm using SSIS, the behavior you're trying to achieve would be inappropriate for my needs (I would either need the entire load to fail with no rows written to the destination, or else have error rows redirected to a log file or table), so I've never tried to set things up this way. I'm also pretty unfamiliar with the Import/Export Wizard, so that's two strikes against me.

With that said, the first thing I would try would be to update the destination component in the package (I do not know whether this is exposed through the wizard; I assume it is not) to turn off fast load. This will slow things down quite a bit, since each row will be inserted into the destination database one by one, but it might give you what you need.

Perhaps someone else has more experience with this problem than I...
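One T-SQL-side option worth sketching here, since Karna asked about BULK INSERT and SQL scripts: if the goal is simply to skip duplicate keys while loading the rest, SQL Server's IGNORE_DUP_KEY index option makes the engine discard duplicate rows with a warning instead of failing the whole insert. A minimal sketch, assuming the T4 table from the original post; the constraint name PK_T4 is a placeholder, not from the thread:

```sql
-- Sketch: replace the primary key with a unique index that skips
-- duplicates (with a warning) instead of aborting the load.
-- PK_T4 is an assumed constraint name; check yours with sp_help.
ALTER TABLE T4 DROP CONSTRAINT PK_T4;

CREATE UNIQUE CLUSTERED INDEX IX_T4_EmpNo
    ON T4 (EmpNo)
    WITH (IGNORE_DUP_KEY = ON);

-- With the index in place, inserting the sample rows keeps the
-- unique ones and drops the duplicate instead of failing:
INSERT INTO T4 (EmpNo, EmpName) VALUES ('1', 'a');
INSERT INTO T4 (EmpNo, EmpName) VALUES ('2', 'b');
INSERT INTO T4 (EmpNo, EmpName) VALUES ('3', 'c');
INSERT INTO T4 (EmpNo, EmpName) VALUES ('1', 'a');  -- ignored with a warning
```

Note that this silently drops the duplicates rather than logging them, so it fits the "skip duplicates" goal but not the "log the bad row" goal; for logging, the SSIS error-output redirection suggested above is the better fit.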

|||Hi Matthew,

Actually, I need to run a job that imports a CSV data file into a table. The CSV has duplicate records. The job should not import the duplicates; it should append only the unique records. When I set the primary key and disable Allow Nulls and then run the job, it does not even append the non-duplicate records: it throws an error and imports no data from the CSV file at all.
Please suggest how to solve this issue.

Thanks in advance
Karna|||

Karna,

I think your best option is to use the error output on the destination component to redirect the error rows. If you don't change the default error configuration, the package will fail as soon as the first duplicate hits the destination table. This is no different from what Matthew has already suggested.

|||Hi,

Thanks for the reply. My problem is that I need to import a CSV data file into one of the tables in the database. The import should skip null rows and rows that are already in the table. I need something that checks the table before importing each row: if the row already exists it should not be imported, otherwise it should. Please suggest how to solve this problem.

Thanks in advance
Karna|||

See this link for a number of different ways to see if a row exists:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1211340&SiteID=1
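The link above covers row-existence checks; the full pattern Karna describes (bulk-load the CSV into a staging table, then append only non-null, non-duplicate rows) can be sketched roughly as follows. The staging table name, file path, and CSV options are assumptions, not from the thread:

```sql
-- Sketch of the staging-table pattern. Names and the file path
-- are placeholders; adjust FIELDTERMINATOR/ROWTERMINATOR to match
-- the actual CSV.
CREATE TABLE T4_Staging (
    EmpNo   VARCHAR(50) NULL,
    EmpName VARCHAR(50) NULL
);

-- Load the raw CSV, bad rows and all, into the staging table.
BULK INSERT T4_Staging
FROM 'C:\data\employees.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Append only rows that are non-null, unique within the file,
-- and not already present in T4.
INSERT INTO T4 (EmpNo, EmpName)
SELECT s.EmpNo, MIN(s.EmpName)
FROM T4_Staging AS s
WHERE s.EmpNo IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM T4 WHERE T4.EmpNo = s.EmpNo)
GROUP BY s.EmpNo;   -- collapses duplicates within the CSV itself

DROP TABLE T4_Staging;
```

Because the staging table has no constraints, the bulk load itself never fails on duplicates or nulls; all the filtering happens in the final INSERT, and the rejected staging rows can be inspected or logged before the staging table is dropped.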
