If there are 10 million records in an external system, how do you add them into your application?
It depends on the scenario and the type of data that you have in the external system.
- We can use a Job Scheduler and process the records in fixed batch sizes (higher traceability of the execution; see the sketch after this list).
- We can configure a File Listener and upload the external data to Pega by splitting the actual file (10 million records) into smaller chunks (full control over the records sent for processing).
- We can configure a Data Flow in Pega with proper partitioning and process the records in one go (less traceability of the execution).
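To make the batching idea concrete, here is a minimal, platform-agnostic sketch (plain JDBC, not a Pega API; the table names `EXT_RECORDS` and `IMPORT_CHECKPOINT` and the batch size are assumptions). Each run reads a checkpoint, processes only the next batch, and advances the checkpoint, so no run starts again from the first record:

```java
import java.sql.*;

/**
 * Sketch of checkpointed batch processing over a hypothetical staging table
 * EXT_RECORDS (numeric primary key ID), with the last processed ID stored in
 * a one-row IMPORT_CHECKPOINT table. Not Pega API code.
 */
public class BatchImportSketch {

    private static final int BATCH_SIZE = 5_000; // assumed fixed batch size

    /** Processes one batch, starting right after the last checkpointed ID. */
    public static void runOneBatch(Connection con) throws SQLException {
        long lastId = readCheckpoint(con); // where the previous run stopped

        // Fetch only the next batch instead of all 10 million rows.
        String sql = "SELECT ID, PAYLOAD FROM EXT_RECORDS WHERE ID > ? "
                   + "ORDER BY ID FETCH FIRST " + BATCH_SIZE + " ROWS ONLY";
        long maxId = lastId;
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, lastId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    process(rs.getString("PAYLOAD")); // map/create the record here
                    maxId = rs.getLong("ID");
                }
            }
        }
        writeCheckpoint(con, maxId); // the next run starts after this ID
    }

    private static long readCheckpoint(Connection con) throws SQLException {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT LAST_ID FROM IMPORT_CHECKPOINT")) {
            return rs.next() ? rs.getLong(1) : 0L;
        }
    }

    private static void writeCheckpoint(Connection con, long lastId) throws SQLException {
        try (PreparedStatement ps =
                 con.prepareStatement("UPDATE IMPORT_CHECKPOINT SET LAST_ID = ?")) {
            ps.setLong(1, lastId);
            ps.executeUpdate();
        }
    }

    private static void process(String payload) {
        // Placeholder for the per-record mapping / data or work object creation.
    }
}
```

The same effect can be achieved with a processed-status flag on each row instead of a numeric checkpoint; either way, the query itself excludes already-processed records.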
Could you give us a short brief about your external system and the type of record (Data- or Work-) you want to import? Then we can suggest an optimal approach.
Hi team,
In any of these approaches, how are we going to split the actual data?
- If we use a Job Scheduler with a fixed batch size, how do we implement that? Whenever we start the Job Scheduler, doesn't it always process from the first record, or do we handle this with a when condition?
- If we use a File Listener, how do we split the actual file into smaller chunks (for instance, something like the sketch after these questions)? If we have a single file with millions of records, how does this work?
- If we use a Data Flow, how do we partition the data? If we are sourcing the data set from an SFTP file, how can we do the partitioning? Doesn't it start processing from the first record?
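For context, this is roughly what "splitting into chunks" could look like outside of Pega, as a minimal sketch in plain Java (assuming a CSV file with a header row; the chunk size and file names are made up for illustration). Each chunk file could then be dropped into the location the File Listener watches:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch: split one large CSV into chunk files of CHUNK_SIZE data lines each,
 * repeating the header line in every chunk. Names and sizes are assumptions.
 */
public class FileSplitterSketch {

    private static final int CHUNK_SIZE = 100_000; // data lines per chunk (assumed)

    public static void split(Path bigFile, Path outDir) throws IOException {
        Files.createDirectories(outDir);
        try (BufferedReader reader = Files.newBufferedReader(bigFile)) {
            String header = reader.readLine();            // keep the CSV header
            List<String> buffer = new ArrayList<>(CHUNK_SIZE);
            int chunkNo = 0;
            String line;
            while ((line = reader.readLine()) != null) {
                buffer.add(line);
                if (buffer.size() == CHUNK_SIZE) {
                    writeChunk(outDir, ++chunkNo, header, buffer);
                    buffer.clear();
                }
            }
            if (!buffer.isEmpty()) {                      // write the final partial chunk
                writeChunk(outDir, ++chunkNo, header, buffer);
            }
        }
    }

    private static void writeChunk(Path outDir, int chunkNo, String header,
                                   List<String> lines) throws IOException {
        Path chunk = outDir.resolve(String.format("records_part_%04d.csv", chunkNo));
        try (BufferedWriter writer = Files.newBufferedWriter(chunk)) {
            writer.write(header);
            writer.newLine();
            for (String l : lines) {
                writer.write(l);
                writer.newLine();
            }
        }
    }
}
```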