Teradata Forums - Database

Use the outcome of a query as input for a new select statement - response (4) by edgar55


And thanks for your help, I got it running!


Converting string to date using REGEXP - Getting error 3798 - forum topic by gpolanch


Hello,
TD gives errors when trying to cast a date that has a single-digit day or month (e.g. 1925-7-9). So I wrote the code below using REGEXP functions to add a leading zero to the day or month where required, but it gives error 3798 (A column or character expression is larger than the max size) even though I am concatenating just a few small strings. I used similar logic to successfully convert formats like 7/9/1925. (I tried to paste that code, but I am not able to paste code easily in this forum interface; it worked once, but now nothing happens when I right-click and hit Paste.) I saw other solutions for this problem on the forum that use SUBSTR, INDEX, etc., but I find that the REGEXP functions result in more understandable code. Any help would be appreciated.
Thanks!
-Greg
-- this results in error 3798 (A column or character expression is larger than the max size.)
SELECT
REGEXP_SUBSTR('1925-7-9', '^[[:digit:]][[:digit:]][[:digit:]][[:digit:]]-') ||
CASE WHEN CHAR_LENGTH(OREPLACE(REGEXP_SUBSTR('1925-7-9', '-[[:digit:]]-'),'-','')) = 1
         THEN '0'||OREPLACE(REGEXP_SUBSTR('1925-7-9', '-[[:digit:]]-'),'-','')
         ELSE OREPLACE(REGEXP_SUBSTR('1925-7-9', '-[[:digit:]][[:digit:]]-'),'-','')
       END ||'-'||
CASE WHEN CHAR_LENGTH(OREPLACE(REGEXP_SUBSTR('1925-7-9', '-[[:digit:]]$'),'-','')) = 1
         THEN '0'||OREPLACE(REGEXP_SUBSTR('1925-7-9', '-[[:digit:]]$'),'-','')
         ELSE OREPLACE(REGEXP_SUBSTR('1925-7-9', '-[[:digit:]][[:digit:]]$'),'-','')
       END;
 
 
 


Converting string to date using REGEXP - Getting error 3798 - response (1) by dnoeth


Wow, you think this is more understandable? :)
You can use a simple regex to add a leading zero to a single digit:

regexp_replace('1925-7-9', '\b([0-9])\b', '0\1')

 
Regarding error 3798: both REGEXP_SUBSTR and OREPLACE return VARCHAR(8000), so if you concatenate several of them you can exceed the maximum expression size. The workaround is to CAST each result to a smaller VARCHAR...
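
For illustration, a minimal sketch of that workaround (not from this reply): CAST the wide intermediate down before using it. It reuses the one-liner above with the explicit occurrence/match arguments that come up later in this thread, so both the month and the day get padded.

-- Hedged sketch, assuming the sample value from the original post:
-- shrink the VARCHAR(8000) result with CAST, then convert to DATE.
SELECT CAST(
         CAST(REGEXP_REPLACE('1925-7-9', '\b([0-9])\b', '0\1', 1, 0, 'c') AS VARCHAR(10))
       AS DATE FORMAT 'YYYY-MM-DD') AS fixed_date;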

Converting string to date using REGEXP - Getting error 3798 - response (2) by gpolanch

SELECT regexp_replace('1925-7-9', '\b([0-9])\b-\b([0-9])\b', '0\1-0\2');

Hey Dieter,
Thanks! Casting worked! Your other example is interesting; it looks like back-referencing, which I am not that familiar with, but I added some logic to handle the day as well (the statement pasted above). That's awesome! Guess I haven't been on this forum in a while; I'm having a hard time pasting code. It always pastes at the top of the message even if my cursor is elsewhere, and in vintage green-screen.
-Greg
 
 
 

Converting string to date using REGEXP - Getting error 3798 - response (3) by dnoeth


Hi Greg,
yep, the "\1" is a back-reference to the first match, i.e. the single digit.
Btw, your regex is overly complicated: it only works for exactly "single digit, hyphen, single digit", while mine adds a leading zero wherever there's a single digit...
 

Converting string to date using REGEXP - Getting error 3798 - response (4) by gpolanch


Thanks Dieter. But when I run your example I get 1925-07-9; the replacement only works on the first single digit. I need it to also pad the second one so that the output is 1925-07-09.

Converting string to date using REGEXP - Getting error 3798 - response (5) by dnoeth


Hi Greg,
I think the defaults for REGEXP_REPLACE changed; it works fine in 15/15.10. You might try

regexp_replace('1925-7-9 4 1 4', '\b([0-9])\b', '0\1', 1, 0, 'c')

The "0" is the occurance and should mean "all"
 

Converting string to date using REGEXP - Getting error 3798 - response (6) by CarlosAL


Hi.
The capturing group will make the expression fail (at least in TD 14, I haven't got any 15x at hand):
REGEXP_REPLACE('1925-3-1','\b([0-9])\b', '0\1', 1, 0, 'c') will give '1925-03-03'.
You can get the correct result by capturing the hyphen instead and using a lookahead:
REGEXP_REPLACE('1925-3-1','(-)(?=[0-9](-|$))','\10',1,0,'c')
It will work with '1925-03-1', '1925-3-01' and '1925-03-01'.
HTH
Cheers.
Carlos.
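
For reference, a quick check (not from this reply) of that expression against the variants listed above, assuming the TD 14 behaviour Carlos describes:

-- Each call pads only the single-digit part; two-digit parts are left alone.
SELECT REGEXP_REPLACE('1925-03-1', '(-)(?=[0-9](-|$))', '\10', 1, 0, 'c')   -- '1925-03-01'
     , REGEXP_REPLACE('1925-3-01', '(-)(?=[0-9](-|$))', '\10', 1, 0, 'c')   -- '1925-03-01'
     , REGEXP_REPLACE('1925-03-01', '(-)(?=[0-9](-|$))', '\10', 1, 0, 'c'); -- unchanged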
 
 
 


Converting string to date using REGEXP - Getting error 3798 - response (7) by gpolanch


Yes, we are on 14.  Now it is working perfectly.  I should have thought to use the all-occurrences arg.   Thanks for your quick response Dieter, as I am under the gun to finish a project and processing/joining on dates is a big part of the remaining work.
-Greg
 

Converting string to date using REGEXP - Getting error 3798 - response (8) by gpolanch


Carlos,
Many thanks to you as well! I am not familiar with lookaheads, but that sounds pretty interesting. I will check that out when I get out of my current time crunch.
-Greg
 

Why Predicate Push Down mechanism doesn't happen? - forum topic by Zolo000


Hello!

I'm interested in the Predicate Push Down mechanism in Teradata.
Could you explain why it doesn't happen in the example below and give some advice on how to fix it?
Thanks in advance.
Description of the example:
UAT_DM_CF.DM_CF_CARD_TURN - table with customers' salary information month by month. Primary Index: AGREEMENT_RK. Primary Key: AGREEMENT_RK, YEAR_MONTH. CUSTOMER_MDM_ID is a customer identifier that changes over time.

UAT_DM_CF.TECH_MDM_RELATIONSHIP - table with the history of all relationships between "old" and "new" customer identifiers. This table is used to sync customer identifiers in UAT_DM_CF.DM_CF_CARD_TURN. Primary Index: PREVIOUS_CUSTOMER_ID. Primary Key: PREVIOUS_CUSTOMER_ID, EFFECTIVE_TO_DTTM.

UAT_VDM_CF.TECH_MDM_RELATIONSHIP_ACT - view with the current relationships between "old" and "new" customer identifiers.
It has the following structure:

REPLACE VIEW UAT_VDM_CF.TECH_MDM_RELATIONSHIP_ACT AS LOCKING ROW FOR ACCESS
   SELECT
        PREVIOUS_CUSTOMER_RK,
        PREVIOUS_CUSTOMER_ID,
        CUSTOMER_RK,
        CUSTOMER_ID
   FROM
        UAT_DM_CF.TECH_MDM_RELATIONSHIP
   WHERE
        EFFECTIVE_TO_DTTM = CAST('5999-12-31 00:00:00' AS TIMESTAMP(0))
        AND DELETED_FLG = '0';

UAT_VDM_CF.DM_CF_CARD_TURN - view with synchronized customer identifiers.
It has the following structure:

REPLACE VIEW UAT_VDM_CF.DM_CF_CARD_TURN AS LOCKING ROW FOR ACCESS
   SELECT
        crdt.AGREEMENT_RK,
        crdt.CONTRACT_ID,
        crdt.YEAR_MONTH,
        COALESCE(rel.CUSTOMER_ID, crdt.CUSTOMER_MDM_ID) as CUSTOMER_MDM_ID,
        crdt.CUSTOMER_MDM_ID as OLD_CUSTOMER_MDM_ID,
        crdt.CURRENCY_ISO_ID,
        crdt.SALARY_FLG,
        crdt.DEBET_TURN_AMT,
        crdt.DEBET_TURN_SALARY_AMT,
        crdt.CREDIT_TURN_AMT,
        crdt.BALANCE_AMT,
        crdt.EMPLOYER_TAX_PAYER_NUM,
        crdt.SOURCE_SYSTEM_CD,
        crdt.BUSINESS_DTTM,
        crdt.PROCESSED_DTTM,
        crdt.LAYER_ID,
        crdt.LOAD_ID
   FROM
        UAT_DM_CF.DM_CF_CARD_TURN as crdt
   LEFT JOIN
        UAT_VDM_CF.TECH_MDM_RELATIONSHIP_ACT as rel
   ON
        crdt.CUSTOMER_MDM_ID = rel.PREVIOUS_CUSTOMER_ID;

After executing the two following queries we were surprised to get a dramatic difference in response time:

select * from UAT_DM_CF.DM_CF_CARD_TURN where CUSTOMER_MDM_ID = '123'; --0.02 seconds
select * from UAT_VDM_CF.DM_CF_CARD_TURN where CUSTOMER_MDM_ID = '123'; --17 minutes and 42 seconds

The question is: why, in the second query, is the condition CUSTOMER_MDM_ID = '123' applied only after the dynamic hash join?
Explain Texts:

Explain select * from UAT_DM_CF.DM_CF_CARD_TURN where CUSTOMER_MDM_ID = '123';

  1) First, we lock a distinct UAT_DM_CF."pseudo table" for read on a
     RowHash to prevent global deadlock for UAT_DM_CF.DM_CF_CARD_TURN.
  2) Next, we lock UAT_DM_CF.DM_CF_CARD_TURN for read.
  3) We do an all-AMPs RETRIEVE step from UAT_DM_CF.DM_CF_CARD_TURN by
     way of index # 4 "UAT_DM_CF.DM_CF_CARD_TURN.CUSTOMER_MDM_ID =
     '123'" with a residual condition of (
     "UAT_DM_CF.DM_CF_CARD_TURN.CUSTOMER_MDM_ID = '123'") into Spool 1
     (group_amps), which is built locally on the AMPs.  The size of
     Spool 1 is estimated with high confidence to be 43 rows (19,307
     bytes).  The estimated time for this step is 0.02 seconds.
  4) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1.
     No rows are returned to the user as the result of statement 2.
     The total estimated time is 0.02 seconds.
Explain select * from UAT_VDM_CF.DM_CF_CARD_TURN where CUSTOMER_MDM_ID = '123';

  1) First, we lock UAT_DM_CF.TECH_MDM_RELATIONSHIP in view
     UAT_VDM_CF.DM_CF_CARD_TURN for access, and we lock UAT_DM_CF.crdt
     in view UAT_VDM_CF.DM_CF_CARD_TURN for access.
  2) Next, we do an all-AMPs RETRIEVE step from a single partition of
     UAT_DM_CF.TECH_MDM_RELATIONSHIP in view UAT_VDM_CF.DM_CF_CARD_TURN
     with a condition of ("UAT_DM_CF.TECH_MDM_RELATIONSHIP in view
     UAT_VDM_CF.DM_CF_CARD_TURN.EFFECTIVE_TO_DTTM = TIMESTAMP
     '5999-12-31 00:00:00'") with a residual condition of (
     "(UAT_DM_CF.TECH_MDM_RELATIONSHIP.EFFECTIVE_TO_DTTM = TIMESTAMP
     '5999-12-31 00:00:00') AND (UAT_DM_CF.TECH_MDM_RELATIONSHIP in
     view UAT_VDM_CF.DM_CF_CARD_TURN.DELETED_FLG = '0')") into Spool 2
     (all_amps) (compressed columns allowed), which is duplicated on
     all AMPs.  The size of Spool 2 is estimated with high confidence
     to be 1,256,112 rows (199,721,808 bytes).  The estimated time for
     this step is 0.15 seconds.
  3) We do an all-AMPs JOIN step from Spool 2 (Last Use) by way of an
     all-rows scan, which is joined to UAT_DM_CF.crdt in view
     UAT_VDM_CF.DM_CF_CARD_TURN by way of an all-rows scan with no
     residual conditions.  Spool 2 and UAT_DM_CF.crdt are right outer
     joined using a dynamic hash join, with condition(s) used for
     non-matching on right table ("NOT (UAT_DM_CF.crdt.CUSTOMER_MDM_ID
     IS NULL)"), with a join condition of (
     "UAT_DM_CF.crdt.CUSTOMER_MDM_ID = PREVIOUS_CUSTOMER_ID").  The
     input table UAT_DM_CF.crdt will not be cached in memory.  The
     result goes into Spool 3 (all_amps) (compressed columns allowed),
     which is built locally on the AMPs.  The result spool file will
     not be cached in memory.  The size of Spool 3 is estimated with
     low confidence to be 1,207,507,023 rows (527,680,569,051 bytes).
     The estimated time for this step is 7 minutes and 20 seconds.
  4) We do an all-AMPs RETRIEVE step from Spool 3 (Last Use) by way of
     an all-rows scan with a condition of ("(( CASE WHEN (NOT
     (CUSTOMER_ID IS NULL )) THEN (CUSTOMER_ID) ELSE (CUSTOMER_MDM_ID)
     END ))= '123'") into Spool 1 (group_amps), which is built locally
     on the AMPs.  The result spool file will not be cached in memory.
     The size of Spool 1 is estimated with low confidence to be
     1,207,507,023 rows (624,281,130,891 bytes).  The estimated time
     for this step is 10 minutes and 21 seconds.
  5) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1.
     No rows are returned to the user as the result of statement 2.
     The total estimated time is 17 minutes and 42 seconds.

The tables have the following structure:

show table UAT_DM_CF.DM_CF_CARD_TURN;

CREATE MULTISET TABLE UAT_DM_CF.DM_CF_CARD_TURN ,NO FALLBACK ,
     NO BEFORE JOURNAL,
     NO AFTER JOURNAL,
     CHECKSUM = DEFAULT,
     DEFAULT MERGEBLOCKRATIO
     (
      CUSTOMER_MDM_ID VARCHAR(100) CHARACTER SET UNICODE CASESPECIFIC,
      CONTRACT_ID VARCHAR(50) CHARACTER SET UNICODE CASESPECIFIC,
      SOURCE_SYSTEM_CD VARCHAR(10) CHARACTER SET UNICODE CASESPECIFIC COMPRESS ('00006','00040','00018','00051','00000'),
      YEAR_MONTH DATE FORMAT 'yyyy-mm-dd',
      SALARY_FLG CHAR(1) CHARACTER SET UNICODE CASESPECIFIC COMPRESS ('0','1'),
      CURRENCY_ISO_ID VARCHAR(3) CHARACTER SET UNICODE CASESPECIFIC COMPRESS '810',
      DEBET_TURN_SALARY_AMT DECIMAL(23,5) COMPRESS 0.00000 ,
      CREDIT_TURN_AMT DECIMAL(23,5) COMPRESS 0.00000 ,
      DEBET_TURN_AMT DECIMAL(23,5) COMPRESS 0.00000 ,
      BALANCE_AMT DECIMAL(23,5) COMPRESS 0.00000 ,
      BUSINESS_DTTM TIMESTAMP(0) NOT NULL,
      PROCESSED_DTTM TIMESTAMP(0) NOT NULL,
      AGREEMENT_RK DECIMAL(18,0) NOT NULL,
      LAYER_ID INTEGER NOT NULL,
      LOAD_ID INTEGER NOT NULL,
      EMPLOYER_TAX_PAYER_NUM VARCHAR(200) CHARACTER SET UNICODE CASESPECIFIC COMPRESS )
PRIMARY INDEX ( AGREEMENT_RK )
PARTITION BY RANGE_N(YEAR_MONTH  BETWEEN DATE '2010-11-01' AND DATE '2020-12-01' EACH INTERVAL '1' MONTH ,
 NO RANGE)
INDEX ( CUSTOMER_MDM_ID );
show table UAT_DM_CF.TECH_MDM_RELATIONSHIP;

CREATE MULTISET TABLE UAT_DM_CF.TECH_MDM_RELATIONSHIP ,NO FALLBACK ,
     NO BEFORE JOURNAL,
     NO AFTER JOURNAL,
     CHECKSUM = DEFAULT,
     DEFAULT MERGEBLOCKRATIO
     (
      PREVIOUS_CUSTOMER_RK INTEGER NOT NULL,
      PREVIOUS_CUSTOMER_ID VARCHAR(100) CHARACTER SET UNICODE CASESPECIFIC NOT NULL,
      PREVIOUS_CUSTOMER_TYPE_CD CHAR(1) CHARACTER SET UNICODE CASESPECIFIC NOT NULL,
      CUSTOMER_RK INTEGER NOT NULL,
      CUSTOMER_ID VARCHAR(100) CHARACTER SET UNICODE CASESPECIFIC NOT NULL,
      CUSTOMER_TYPE_CD CHAR(1) CHARACTER SET UNICODE CASESPECIFIC NOT NULL,
      SOURCE_SYSTEM_CD VARCHAR(10) CHARACTER SET UNICODE CASESPECIFIC NOT NULL,
      FILE_ID INTEGER NOT NULL,
      PROCESSED_DTTM TIMESTAMP(0) NOT NULL,
      EFFECTIVE_FROM_DTTM TIMESTAMP(0) NOT NULL,
      EFFECTIVE_TO_DTTM TIMESTAMP(0) NOT NULL,
      LOAD_ID INTEGER NOT NULL,
      DELETED_FLG CHAR(1) CHARACTER SET UNICODE CASESPECIFIC NOT NULL,
      IS_ACTIVE_FLG CHAR(1) CHARACTER SET UNICODE CASESPECIFIC NOT NULL)
PRIMARY INDEX ( PREVIOUS_CUSTOMER_ID )
PARTITION BY RANGE_N(CAST((EFFECTIVE_TO_DTTM ) AS DATE AT TIME ZONE INTERVAL '3:00' HOUR TO MINUTE ) BETWEEN DATE '2010-01-01' AND DATE '2017-12-31' EACH INTERVAL '3' DAY ,
DATE '5999-12-31' AND DATE '5999-12-31' EACH INTERVAL '1' DAY ,
 NO RANGE)
INDEX ( PREVIOUS_CUSTOMER_RK );

The tables have the following statistics:

show statistics on UAT_DM_CF.DM_CF_CARD_TURN;

COLLECT STATISTICS
                   -- default SYSTEM SAMPLE PERCENT
                   -- default THRESHOLD 10 DAYS
                   -- default THRESHOLD 10.00 PERCENT
            COLUMN ( AGREEMENT_RK ) ,
            COLUMN ( CUSTOMER_MDM_ID )
                ON UAT_DM_CF.DM_CF_CARD_TURN ;

    Date        Time            Unique Value        Column Names        Column Dictionary Name        Column SQL Names    Column Names UEscape
1    16/03/01    14:54:56           1,207,507,022     *                    *                            "*"                 NULL
2    16/02/19    11:27:39              37,533,183     AGREEMENT_RK        AGREEMENT_RK                 AGREEMENT_RK         NULL
3    16/03/01    14:54:56              18,000,808     CUSTOMER_MDM_ID     CUSTOMER_MDM_ID              CUSTOMER_MDM_ID      NULL
show statistics on UAT_DM_CF.TECH_MDM_RELATIONSHIP;

COLLECT STATISTICS
                   -- default SYSTEM SAMPLE PERCENT
                   -- default THRESHOLD 10 DAYS
                   -- default THRESHOLD 10.00 PERCENT
            COLUMN ( PREVIOUS_CUSTOMER_ID ) ,
            COLUMN ( CUSTOMER_ID ) ,
            COLUMN ( CUSTOMER_RK ) ,
            COLUMN ( PREVIOUS_CUSTOMER_RK ) ,
            COLUMN ( EFFECTIVE_TO_DTTM,DELETED_FLG ) ,
            COLUMN ( EFFECTIVE_TO_DTTM ) ,
            COLUMN ( DELETED_FLG )
                ON UAT_DM_CF.TECH_MDM_RELATIONSHIP ;

      Date          Time      Unique Value      Column Names                      Column Dictionary Name          Column SQL Names          Column Names UEscape
1     16/03/01    14:58:19         16,250            *                               *                               "*"                             NULL
2     16/02/18    20:05:36         16,143            PREVIOUS_CUSTOMER_ID            PREVIOUS_CUSTOMER_ID            PREVIOUS_CUSTOMER_ID            NULL
3     16/02/18    20:04:54          8,001            CUSTOMER_ID                     CUSTOMER_ID                     CUSTOMER_ID                     NULL
4     16/02/18    20:04:55          8,001            CUSTOMER_RK                     CUSTOMER_RK                     CUSTOMER_RK                     NULL
5     16/02/18    20:05:38         16,143            PREVIOUS_CUSTOMER_RK            PREVIOUS_CUSTOMER_RK            PREVIOUS_CUSTOMER_RK            NULL
6     16/03/01    14:58:17             13            EFFECTIVE_TO_DTTM,DELETED_FLG   EFFECTIVE_TO_DTTM,DELETED_FLG   EFFECTIVE_TO_DTTM,DELETED_FLG   NULL
7     16/03/01    14:58:18             12            EFFECTIVE_TO_DTTM               EFFECTIVE_TO_DTTM               EFFECTIVE_TO_DTTM               NULL
8     16/03/01    14:58:19              2            DELETED_FLG                     DELETED_FLG                     DELETED_FLG                     NULL

oreplace of -> - response (7) by Fred


The character arguments to the functions must all be LATIN to avoid the implicit translation of all character values to Unicode (which will fail with error 6706); note that literals are Unicode.
OREPLACE(latinColumn,CHR(26),CHR(32)) /* replace with a space */
OREPLACE(latinColumn,CHR(26),translate('' using Unicode_to_Latin)) /* replace with empty string, i.e. remove */
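
For context, a hedged usage sketch (the table and column names are hypothetical, not from the thread) applying the second form to a LATIN column:

-- Strip the 0x1A (SUB) character from a LATIN column without triggering the
-- implicit-translation error 6706; the empty replacement string is translated
-- to LATIN so every argument stays LATIN.
SELECT OREPLACE(latinColumn, CHR(26), TRANSLATE('' USING Unicode_to_Latin)) AS cleaned_col
FROM mydb.mytable;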

How do I get the max date row? - forum topic by tgutridge2


Data looks like this:

Row Num    Loc Num    Loc Name        DTTM
1          1234       Philadelphia    05/21/2016 23:58:00
2          1234       Philadelphia    02/17/2015 23:58:00
3          1234       Philadelphia    01/12/2013 23:58:00
4          1234       Philadelphia    11/07/2001 00:00:00
 
In this example I need to write a statement that returns only row 1, because it has the most recent DTTM. It's a temporal table, and since the location is closed my query looks like this:
 
NONSEQUENCED VALIDTIME
SELECT *
FROM LOC
WHERE Loc_Num = 1234
 
 


MERGE statement problem with IDENTITY field on TARGET Table - forum topic by antoniovaldes


Hi all,
 
 I am facing some challenges with the MERGE statement. I am trying to move data from a source table to a destination table.
 These are the DDLs of my two tables:

CREATE VOLATILE TABLE ZZ_SOURCE (
        TEST_NUM VARCHAR(20) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
        TEST_COMMENT_TXT VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC,
        TEST_COMMENT_TYPE_CD CHAR(2) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL
    ) UNIQUE PRIMARY INDEX ("TEST_NUM", "TEST_COMMENT_TYPE_CD") ON COMMIT PRESERVE ROWS;

and

CREATE SET TABLE ODS_TABLES_TEST.TARGET, NO FALLBACK ,
     NO BEFORE JOURNAL,
     NO AFTER JOURNAL,
     CHECKSUM = DEFAULT,
     DEFAULT MERGEBLOCKRATIO
     (
      TEST_COMMENT_ID INTEGER NOT NULL GENERATED BY DEFAULT AS IDENTITY
           (START WITH 1 
            INCREMENT BY 1 
            MINVALUE -2147483647 
            MAXVALUE 2147483647 
            NO CYCLE),
      TEST_NUM VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
      TEST_COMMENT_TYPE_CD CHAR(2) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
      TEST_COMMENT_TXT VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC,
      UPDATE_INTERFACE_NM VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
      CREATE_DTTM TIMESTAMP(6) NOT NULL,
      UPDATE_DTTM TIMESTAMP(6))
PRIMARY INDEX ( TEST_COMMENT_ID );

So my 1st MERGE statement looked like this:

MERGE INTO "ODS_TABLES_TEST"."TARGET" tt
USING 
(
    SELECT 
        "TEST_NUM"
        ,"TEST_COMMENT_TYPE_CD"
        ,"TEST_COMMENT_TXT"
        ,'1048 TEST COMMENT' AS "UPDATE_INTERFACE_NM"
        ,CURRENT_TIMESTAMP AS "CREATE_DTTM"
        ,CURRENT_TIMESTAMP AS "UPDATE_DTTM"
    FROM
        "ZZ_SOURCE"
) ss 
ON (ss."TEST_NUM" = tt."TEST_NUM" AND ss."TEST_COMMENT_TYPE_CD" = tt."TEST_COMMENT_TYPE_CD")
WHEN MATCHED THEN UPDATE SET "TEST_COMMENT_TXT" = ss."TEST_COMMENT_TXT"
WHEN NOT MATCHED THEN INSERT ("TEST_NUM", "TEST_COMMENT_TYPE_CD", "TEST_COMMENT_TXT", "UPDATE_INTERFACE_NM", "CREATE_DTTM", "UPDATE_DTTM" ) 
                       values(ss."TEST_NUM", ss."TEST_COMMENT_TYPE_CD", ss."TEST_COMMENT_TXT", ss."UPDATE_INTERFACE_NM", ss."CREATE_DTTM", ss."UPDATE_DTTM");

and I got this error:
Failed [5758 : HY000] MyProcedure:The search condition must fully specify the Target table primary index and partition column(s) and expression must match INSERT specification primary index and partition column(s)
Because of this I modified my MERGE statement to look like the following, so that the matching uses the TARGET primary index, but even this throws the same error:

MERGE INTO "ODS_TABLES_TEST"."TARGET" tt
USING 
(
    SELECT
        t."TEST_COMMENT_ID"
        ,zz."TEST_NUM"
        ,zz."TEST_COMMENT_TYPE_CD"
        ,zz."TEST_COMMENT_TXT"
        ,'xxxxxxx xxxxx' AS "UPDATE_INTERFACE_NM"
        ,CURRENT_TIMESTAMP AS "CREATE_DTTM"
        ,CURRENT_TIMESTAMP AS "UPDATE_DTTM"
    FROM
        "ZZ_SOURCE" zz
        LEFT JOIN "ODS_TABLES_TEST"."TARGET" t ON
            zz."TEST_NUM" = t."TEST_NUM"
            AND zz."TEST_COMMENT_TYPE_CD" = t."TEST_COMMENT_TYPE_CD"
) ss 
ON (ss."TEST_COMMENT_ID" = tt."TEST_COMMENT_ID")
WHEN MATCHED THEN UPDATE SET "TEST_COMMENT_TXT" = ss."TEST_COMMENT_TXT"
WHEN NOT MATCHED THEN INSERT ( "TEST_NUM", "TEST_COMMENT_TYPE_CD", "TEST_COMMENT_TXT", "UPDATE_INTERFACE_NM", "CREATE_DTTM", "UPDATE_DTTM" ) 
                      values(ss."TEST_NUM", ss."TEST_COMMENT_TYPE_CD", ss."TEST_COMMENT_TXT", ss."UPDATE_INTERFACE_NM", ss."CREATE_DTTM", ss."UPDATE_DTTM");

 
Now if I change my target table DDL to be something like this (remove the identity field and change the primary index):
 

CREATE SET TABLE ODS_TABLES_TEST.TARGET, NO FALLBACK ,
     NO BEFORE JOURNAL,
     NO AFTER JOURNAL,
     CHECKSUM = DEFAULT,
     DEFAULT MERGEBLOCKRATIO
     (
      TEST_NUM VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
      TEST_COMMENT_TYPE_CD CHAR(2) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
      TEST_COMMENT_TXT VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC,
      UPDATE_INTERFACE_NM VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
      CREATE_DTTM TIMESTAMP(6) NOT NULL,
      UPDATE_DTTM TIMESTAMP(6))
PRIMARY INDEX ( TEST_NUM , TEST_COMMENT_TYPE_CD );

and then execute my first MERGE statement, it works fine.
Now I am not supposed to modify the TARGET DDL just because of this tiny issue :P

How can I make the MERGE statement work when the target table has an identity column?
 


program to retrieve datbase table names - forum topic by atesting


Hi,
I am new to Teradata. Using the following SQL query in Teradata SQL Assistant I am able to retrieve table names:
SELECT TABLENAME, tablekind FROM DBC.TABLES WHERE DATABASENAME = 'DBC' AND TABLEKIND = 'T' ;
When I pass the same SQL query to the DBCHCL() function, I only get the description of the result set's columns, not the table list itself; the Teradata response to the SQL query always returns zero bytes.
What is the proper order in which to invoke DBCHCL(), and which input options need to be set? Is there a code example that fetches the table list?
 
 


How do I get the max date row? - response (1) by sakthikrr


You can use qualify as shown below:

drop table loc;

create table LOC 
(
row_num integer,
Loc_Num integer,
Loc_Name varchar(50),
DTTM timestamp
);

insert into loc values (1,1234, 'Philadelphia', '2016-05-21 23:58:00');
insert into loc values (2,1234, 'Philadelphia', '2015-02-17 23:58:00');
insert into loc values (3,1234, 'Philadelphia', '2013-01-12 23:58:00');
insert into loc values (4,1234, 'Philadelphia', '2001-11-07 00:00:00');

SELECT *
FROM
LOC 
WHERE
LOC_NUM = 1234
QUALIFY (ROW_NUMBER() OVER(PARTITION BY LOC_NUM ORDER BY DTTM DESC)=1);

Hope this helps!
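
For completeness, a hedged sketch (not part of the reply above) of the same QUALIFY approach combined with the NONSEQUENCED VALIDTIME query from the original post, assuming LOC is the valid-time table described there:

-- Return only the row with the most recent DTTM per location.
NONSEQUENCED VALIDTIME
SELECT *
FROM LOC
WHERE Loc_Num = 1234
QUALIFY ROW_NUMBER() OVER (PARTITION BY Loc_Num ORDER BY DTTM DESC) = 1;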

Query to remove some characters from a column - forum topic by sankalpk


Example: the column contains values like
have your %[hidden_prod_class_desc]Q4_1% serviced by %[hidden_bo_name]Q3_1%
have your %[hidden_desc]Q14_1% serviced by %[hidden_bo_name]Q9_1%
What I need is
have your [hidden_prod_class_desc] serviced by [hidden_bo_name]
have your [hidden_desc] serviced by [bo_name]
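
A hedged sketch of one way this could be done with REGEXP_REPLACE (not an answer from the thread; "mytable" and "msg_txt" are placeholder names, and the pattern assumes the tokens always look like %[...]Qn_n%). It only strips the % wrappers and the Qn_n suffix:

-- Keep the bracketed name, drop the surrounding % signs and the trailing Qn_n token.
SELECT REGEXP_REPLACE(msg_txt, '%(\[[^\]]+\])Q[0-9]+_[0-9]+%', '\1', 1, 0, 'c') AS cleaned_txt
FROM mytable;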


system-determined change threshold for Stats-- Teradata - forum topic by Chiraggorsia


Hi,
We are collecting stats on a table; however, the stats collection is getting skipped.
When we run EXPLAIN on the COLLECT STATISTICS statement, we see something like:
We SKIP collecting STATISTICS for ('ID1,ID2'), because the estimated data
change of 9.77 % does not exceed the system-determined change threshold of 50 %.
3) We SKIP collecting STATISTICS for ('ID,ID1'), because the estimated data
change of 9.65 % does not exceed the system-determined change threshold of 50 %.
4) We SKIP collecting STATISTICS for ('ID,ID2'), because the estimated data
change of 9.65 % does not exceed the system-determined change threshold of 50 %.
 
I believe the "system-determined change threshold" is defined by a DBS Control parameter.
I just wanted to know how the "system-determined change threshold" can be different for different columns.
 
Regards,
Chirag
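
For reference, a hedged sketch (not from the thread; the column names are the ones quoted in the EXPLAIN output, and MyDatabase.MyTable is a placeholder) of how an explicit threshold can be specified so it overrides the system-determined one, assuming TD 14.10+ syntax:

-- Recollect whenever the estimated data change exceeds 5 %, instead of the
-- system-determined threshold; USING NO THRESHOLD would force recollection.
COLLECT STATISTICS
    USING THRESHOLD 5 PERCENT
    COLUMN (ID1, ID2),
    COLUMN (ID, ID1),
    COLUMN (ID, ID2)
ON MyDatabase.MyTable;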


Why Predicate Push Down mechanism doesn't happen? - response (1) by dnoeth


The optimizer can't push this, because CUSTOMER_MDM_ID is a column in query #1, but the result of a calculation in #2: COALESCE(rel.CUSTOMER_ID, crdt.CUSTOMER_MDM_ID)
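
To make the idea concrete, here is one possible manual rewrite (a hedged sketch, not something suggested in this reply): filter on the underlying base column so the secondary index on CUSTOMER_MDM_ID has a chance of being used, and re-apply the predicate on the COALESCEd column afterwards.

-- OLD_CUSTOMER_MDM_ID in the view is the base column crdt.CUSTOMER_MDM_ID, so
-- the first predicate is a plain base-column condition; the final predicate
-- keeps the result identical to WHERE CUSTOMER_MDM_ID = '123' on the view.
SELECT v.*
FROM UAT_VDM_CF.DM_CF_CARD_TURN v
WHERE ( v.OLD_CUSTOMER_MDM_ID = '123'
        OR v.OLD_CUSTOMER_MDM_ID IN
           ( SELECT rel.PREVIOUS_CUSTOMER_ID
             FROM UAT_VDM_CF.TECH_MDM_RELATIONSHIP_ACT rel
             WHERE rel.CUSTOMER_ID = '123' ) )
  AND v.CUSTOMER_MDM_ID = '123';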
 

oreplace of -> - response (8) by memostone


Hi Fred,
 
CHR(26) works, thank you!
