Thursday, January 31, 2019

Fast Export vs Fast Load (Teradata)





**************
FAST EXPORT
**************
  • Used to export data from Teradata tables into flat files
  • Can generate data in report format
  • Data can be extracted from one or more tables using joins
  • Exports data in blocks (useful for extracting large volumes)
  • Ships data over multiple session connections simultaneously, thereby leveraging the total connectivity available between the client platform and the database engine. To do this, FastExport spends more resources executing the query so that the blocks, once exported over multiple sessions, can easily be reassembled in the right order by the client without additional sorting or processing of the rows (see the SESSIONS sketch after the note below).



Note:
  - The block size is an indication of the message size which carries the rows exported per message
  - The block size (63 or 64 KB) is not user modifiable in FastExport
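
The number of parallel sessions described above is requested with the SESSIONS option of .BEGIN EXPORT. The fragment below is only a sketch of that one option, not a complete job; the session count of 8 and the output path are arbitrary assumptions, and the rest of the script (LOGTABLE, LOGON, the SELECT, .END EXPORT, .LOGOFF) would look like the sample that follows.

.BEGIN EXPORT SESSIONS 8;   /* ask for up to 8 parallel export sessions */
.EXPORT OUTFILE /tmp/sample_extract.dat MODE RECORD FORMAT TEXT;
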
******************
SAMPLE FAST EXPORT
******************
.LOGTABLE RDA_D_ODM_T.ft_export_intvl_error;
.logmech LDAP;
.LOGON TDDBPRDRPT/p2ki,Nutrition#0987;
.BEGIN EXPORT;
.EXPORT OUTFILE /s3/rd//test_fexp.csv Format TEXT MODE RECORD;
select top 10 cast((sa_id||','||uniq_sa_id||','||usg_dt||','||
cast(elec_intvl_end_dttm as CHAR(19))||','||intvl_lgth||','||uom||','
||usg_amt||','||ener_dir_cd||','||
dst_shft_amt||','||imp_sta) as CHAR(150))
from RDA_P_ODM.elec_intvl_usg_all where usg_dt ='2017-12-29';
.END EXPORT;
.LOGOFF;

**************
FAST LOAD
**************
  • The FastLoad utility is used to load data into empty tables. Since it does not use transient journals, data can be loaded quickly.
  • It does not load duplicate rows, even if the target table is a MULTISET table
  • Transfers data in blocks (usually 63 or 64 KB buffers)
  • Works directly on the target table
  • Multiple AMPs work in parallel on receiving, deblocking, data type conversion, hashing and redistribution of the data
  • Can insert data only into an empty table, which removes the need for journaling: if the load does not finish properly, the recovery is simply to delete the target table and start over (a sketch of such an empty staging table follows this list)
  • It is easy to add indexes and other table attributes after the table has been loaded with the data
  • No need for journals because the data is either loaded or it is not; there is nothing to roll forward or back
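
For reference, the Employee_Stg staging table used in the sample below would be created as an ordinary empty table before the load runs. The DDL here is only a hypothetical sketch: the column types are assumptions chosen to match the VARTEXT input and the FORMAT clauses in the INSERT, not the actual table definition.

/* Hypothetical DDL for the empty FastLoad target. Note there are no secondary
   indexes, join indexes or foreign key references (see LIMITATION below).   */
CREATE MULTISET TABLE tduser.Employee_Stg (
   EmployeeNo   VARCHAR(10),
   FirstName    VARCHAR(30),
   LastName     VARCHAR(30),
   BirthDate    DATE FORMAT 'YYYY-MM-DD',
   JoinedDate   DATE FORMAT 'YYYY-MM-DD',
   DepartmentNo VARCHAR(2)
)
PRIMARY INDEX (EmployeeNo);
/* Even though the table is MULTISET, FastLoad still discards duplicate rows. */
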
******************
SAMPLE FAST LOAD
******************

LOGON 192.168.1.102/dbc,dbc;  
   DATABASE tduser;  
   BEGIN LOADING tduser.Employee_Stg  
      ERRORFILES Employee_ET, Employee_UV  
      CHECKPOINT 10;  
      SET RECORD VARTEXT ",";  
      DEFINE in_EmployeeNo (VARCHAR(10)), 
         in_FirstName (VARCHAR(30)), 
         in_LastName (VARCHAR(30)), 
         in_BirthDate (VARCHAR(10)), 
         in_JoinedDate (VARCHAR(10)), 
         in_DepartmentNo (VARCHAR(02)), 
         FILE = employee.txt;
      INSERT INTO Employee_Stg (
         EmployeeNo,
         FirstName,
         LastName,
         BirthDate,
         JoinedDate, 
         DepartmentNo
      ) 
      VALUES (  
         :in_EmployeeNo, 
         :in_FirstName, 
         :in_LastName, 
         :in_BirthDate (FORMAT 'YYYY-MM-DD'), 
         :in_JoinedDate (FORMAT 'YYYY-MM-DD'),
         :in_DepartmentNo
      ); 
   END LOADING;  
LOGOFF;
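
Once the job ends, rows rejected during the two load phases land in the error tables named on the ERRORFILES clause (Employee_ET and Employee_UV above); FastLoad drops an error table automatically when it is empty, so a missing table simply means nothing was rejected. A rough post-load check, run from BTEQ or any SQL client rather than inside FastLoad, might look like this, assuming the error tables were created in the default database tduser:

SELECT COUNT(*) FROM tduser.Employee_Stg;   /* rows that reached the target table        */
SELECT COUNT(*) FROM tduser.Employee_ET;    /* rows rejected for conversion/constraint errors */
SELECT COUNT(*) FROM tduser.Employee_UV;    /* rows rejected for uniqueness violations   */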

LIMITATION
The target table must not have a secondary index, a join index, or a foreign key reference, and it must be empty when the load starts.
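
As the bullets above note, such indexes are normally added once the load has finished. A hypothetical example (the index name and column are arbitrary choices, not part of the original script):

/* Add a non-unique secondary index after the FastLoad has completed */
CREATE INDEX idx_dept (DepartmentNo) ON tduser.Employee_Stg;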









