Channel: Teradata Forums - Database
Viewing all 14773 articles

Help! WITH RECURSIVE does not return desired result.. - response (4) by Arparmar


Hi Ashish,
Yes, as far as I know, you can't create a view on top of a recursive view.
So it is better to create a temp table from the recursive view and then create the new view on top of that temp table. Maybe it will help you.
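A minimal sketch of that approach (all object names here are illustrative, not from the original post):

```sql
-- Materialize the recursive view's result into a plain table...
CREATE TABLE mydb.rec_result AS (
    SELECT * FROM mydb.my_recursive_view
) WITH DATA;

-- ...then build the new view on top of the materialized table
REPLACE VIEW mydb.my_new_view AS
SELECT * FROM mydb.rec_result;
```

The table would need to be refreshed whenever the underlying data changes, since it is a snapshot rather than a live view.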
 
 


How to calculate a character occurrence in a string in teradata - response (7) by Arparmar

SELECT CHAR_LENGTH(TRIM(val)),
       OTRANSLATE(val, '/!@#*$%^&', ''),
       CASE
         WHEN CHAR_LENGTH(TRIM(val)) - CHAR_LENGTH(TRIM(OTRANSLATE(val, '/!@#*$%^&', ''))) >= 6
         THEN SUBSTR(val, 0, INSTR(OTRANSLATE(val, '/!@#*$%^&', '/'), '/', 1, 6))
         ELSE val
       END AS res
FROM regexp;

Hi Depook,
We can use multiple symbol characters in the OTRANSLATE function to search within a column, much like the REGEXP_REPLACE function.
Please see the SQL above; I hope it helps.
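To make the counting idea explicit, here is a minimal sketch (assuming the same regexp table and val column as above): removing the search character with OTRANSLATE and comparing lengths yields the occurrence count.

```sql
-- Number of '/' characters = length before removal minus length after removal
SELECT val,
       CHAR_LENGTH(val) - CHAR_LENGTH(OTRANSLATE(val, '/', '')) AS slash_count
FROM regexp;
```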
 
Regards
Arparmar

adding hours in timestamp returns wrong results in Teradata 15 but works fine in Teradata 14 - forum topic by Peterpannj


I have the query below, which returns different results in versions 14 and 15. Version 14 gives the correct result, but version 15 is off by 5 hours.
 
Teradata 14

SELECT CAST(CAST(CURRENT_TIMESTAMP  AS CHAR(19)) ||'-01:00' AS TIMESTAMP) ;

Result: 2016-02-26 14:38:56.000000
 
Teradata 15 
Result: 2016-02-26 09:41:12.000000

Forums: 

Performance Data - response (1) by Fred


Similar information is available. See the "Introduction to Teradata" manual for an overview and some links, and ask here if there is something specific you are looking for and can't find.
 
In particular, you could review the PMPC Open API table functions (Application Programming Reference manual) and the Teradata Viewpoint monitoring functionality, which collects snapshots of PMPC API and other information and displays it in a variety of ways. I'd also recommend checking out the Database Query Logging (DBQL) functionality and perhaps ResUsage (Database Administration manual).
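For example, a day's query history can be pulled from the DBQL views with plain SQL (a sketch; the exact columns available vary by release and by which DBQL options are enabled):

```sql
SELECT UserName, StartTime, AMPCPUTime, TotalIOCount, QueryText
FROM DBC.QryLogV
WHERE CAST(StartTime AS DATE) = CURRENT_DATE
ORDER BY StartTime DESC;
```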

Need help to implement the logic - response (12) by srivigneshkn


Hi,
I tried a solution using a self join on the table on tt_id, picking the greatest-priority row per group with QUALIFY.

I tested it for the input set below with different scenarios and it worked.

 

Input Set:

tt_id    priority_id
1,201    1
1,201    2
1,201    3
1,201    4
1,201    5
1,202    1
1,203    1
1,203    3
1,204    2
1,204    3

Query:

SELECT a.tt_id,
       b.priority_id AS StartPriority,
       a.priority_id AS EndPriority
FROM odm_europe_t.priority a
INNER JOIN odm_europe_t.priority b
  ON a.tt_id = b.tt_id
 AND a.priority_id > b.priority_id
QUALIFY ROW_NUMBER() OVER (PARTITION BY a.tt_id ORDER BY b.priority_id, a.priority_id DESC) = 1
ORDER BY a.tt_id;

 

Output:

TTID     StartPriority    EndPriority
1,201    1                5
1,203    1                3
1,204    2                3

 

 

Please check and let me know if it works for you.

 

Thanks & Regards,

Srivignesh KN

Tactical Queries - response (3) by Fred


If you are looking at ResUsageSpma, did you check ProcBlks*/ProcWait* columns?
Flow control is possible even if you have CPU available. Did you check ResUsageSawt? Do you throttle the concurrency of non-tactical work?
 
How many sessions are sending SP calls to the database concurrently? Just one? Do sessions stay logged on for many SP calls, or do you log on a new session for each call?

How to calculate a character occurrence in a string in teradata - response (11) by srivigneshkn


Hi Deepok/Jitu,

 

I have replicated your scenario, tried out a solution, and was able to get the desired output.

 

1. I created a table (tmp_substr) with two columns (Str1, Str2). Str1 holds the values for your first scenario (with / as the delimiter) and Str2 holds the values for your second scenario (/ replaced with ~!).

 

Str1                    Str2
1/2/3/4/5               1~!2~!3~!4~!5
1/2/3/4/5/6             1~!2~!3~!4~!5~!6
1/2/3/4/5/6/7/8         1~!2~!3~!4~!5~!6~!7~!8
1/2/3/4/5/6/7/8/9/10    1~!2~!3~!4~!5~!6~!7~!8~!9~!10

 

2. First scenario (column Str1), handling only /:

I used regular expressions to solve this. First I find the 6th occurrence of / using REGEXP_INSTR; if there are fewer than 6 occurrences, the whole string is displayed, and if there are 6 or more occurrences, the string is cut at the position of the 6th occurrence.

 

Query for the first scenario (column Str1, handling only /):

SELECT SUBSTR(str1, 0,
              CASE WHEN REGEXP_INSTR(str1, '/', 1, 6, 1, 'c') = 0
                   THEN LENGTH(str1) + 1
                   ELSE REGEXP_INSTR(str1, '/', 1, 6, 1, 'c') - 1
              END) AS Output
FROM tmp_substr;

 

Output:

Str1                    Output
1/2/3/4/5               1/2/3/4/5
1/2/3/4/5/6             1/2/3/4/5/6
1/2/3/4/5/6/7/8         1/2/3/4/5/6
1/2/3/4/5/6/7/8/9/10    1/2/3/4/5/6

 

3. Second scenario (column Str2), replacing ~! with / and then handling /:

Here we replace ~! with / using REGEXP_REPLACE and then apply the same logic as in step 2.

 

Query for the second scenario (column Str2, replacing ~! with / and then handling /):

SELECT str2,
       SUBSTR(REGEXP_REPLACE(str2, '~!', '/', 1, 0, 'c'), 0,
              CASE WHEN REGEXP_INSTR(REGEXP_REPLACE(str2, '~!', '/', 1, 0, 'c'), '/', 1, 6, 1, 'c') = 0
                   THEN LENGTH(REGEXP_REPLACE(str2, '~!', '/', 1, 0, 'c')) + 1
                   ELSE REGEXP_INSTR(REGEXP_REPLACE(str2, '~!', '/', 1, 0, 'c'), '/', 1, 6, 1, 'c') - 1
              END) AS Output
FROM tmp_substr;

 

Output:

Str2                             Output
1~!2~!3~!4~!5                    1/2/3/4/5
1~!2~!3~!4~!5~!6                 1/2/3/4/5/6
1~!2~!3~!4~!5~!6~!7~!8           1/2/3/4/5/6
1~!2~!3~!4~!5~!6~!7~!8~!9~!10    1/2/3/4/5/6

 

You can use query 2 for your scenario (replace str2 with your column name and tmp_substr with your table name). Please check and let me know whether this worked for you.

 

Thanks & Regards,

Srivignesh KN

Generate an Alphanumeric ID - response (4) by ajaypratap


Help! WITH RECURSIVE does not return desired result.. - response (5) by ashish.jn.in


Thanks Arun. I will try your suggestion. 

Add minutes(another column) to time column - forum topic by ajaypratap


Hi Folks,
I want to add a certain number of minutes to a time column. I am trying to implement this using an interval.
SELECT TIME_COLUMN + INTERVAL '45' MINUTE works fine.
But when I replace '45' with another column (TIM_MIN), I get error 3707.
Could you suggest an alternate way to implement this?

Regards,
Ajay

Forums: 

Teradata Fast Export with Macro - forum topic by prsbrs


Hi,
I need your expert help. I am new to TD and exploring options to do a manual task in a more efficient way:
 
I have a table (2M records) keyed on customer ID (1000 IDs). I need to somehow produce 1000 CSV files, one per customer ID. Is there a way to do it?
 
Please help.
 
Thanks, Prs

Forums: 

Add minutes(another column) to time column - response (1) by Fred


SELECT TIME_COLUMN + TIM_MIN * INTERVAL '1' MINUTE
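The trick is that an integer column multiplied by a one-unit interval yields an interval of that many minutes. A quick illustration with a literal standing in for the column (values are illustrative):

```sql
-- 45 * INTERVAL '1' MINUTE evaluates to an interval of 45 minutes
SELECT TIME '10:15:00' + 45 * (INTERVAL '1' MINUTE);  -- 11:00:00
```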

adding hours in timestamp returns wrong results in Teradata 15 but works fine in Teradata 14 - response (1) by Fred


Your session time zone offset is different (look at HELP SESSION), likely due to differences in how dbscontrol and/or tdlocaledef were configured.
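A sketch of how to inspect and, if needed, override the session offset (the zone value shown is illustrative):

```sql
HELP SESSION;   -- the "Time Zone" field shows the current session offset

SET TIME ZONE INTERVAL -'05:00' HOUR TO MINUTE;   -- pin an explicit offset for this session
```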

Performance Data - response (2) by imaadesi


Thanks. I have looked at the Intro, DBQL, Viewpoint and ResUsage info, but perhaps not in much detail, and I will look more into these. Perhaps the API functions you mentioned would also be helpful.
What I am interested in is basic info for a DBA, i.e. who is running what SQL in the database and whether those sessions have any bottlenecks, e.g. locks, waits, high I/O, high CPU, etc. I can do this in Oracle without any tool by writing simple SQL. Are there tables/views available in Teradata to get this info?

BIGINT on Single table join index - forum topic by spongebob


Hi,
I created a single-table join index on my table_1 with the same PI as the PI of table_2, so that I can avoid the redistribution.
table_1: 1.5 billion rows
table_2: 4 million rows
I could not create an STJI on the smaller table because it does not have the UPI column of the larger table.

Scenario 1: if I include a BIGINT column in my STJI and then look at the EXPLAIN plan, the STJI is not used.
Scenario 2: if I replace the BIGINT column with another column (SMALLINT, VARCHAR, or CHAR data types), the STJI is used, as I see in the EXPLAIN plan.
My question is: why does the optimizer choose not to use the STJI in scenario 1?
STJI: the join index fully covers the SQL statement.
CREATE JOIN INDEX test_db.STJI_table1 AS
SELECT col1
     , col2
     , col3
     , col4
     , col5
     , col6 -- bigint; if I replace this with another column of a smaller data type (and the same column in the SQL statement), the STJI is used by the optimizer
FROM test_db.table1 a
PRIMARY INDEX (col1);

Forums: 

Performance Data - response (3) by Fred


DBQL (request level) and ResUsage (system level) views provide such capabilities after the fact, and the Open API table functions are used to obtain snapshots in real time, all via SQL.
There are no predefined views for the API functions. It is a bit more efficient to filter the output by passing specific arguments, but you could certainly define views that call the functions with wild-card arguments and filter the output in a WHERE clause.

unknown data in table - forum topic by wxs@qq.com


We found some curious cases in TD 14.10. The DB has 3 active nodes and 1 HSN. The DBA swapped one active node with the HSN in February. After the switch:
1. Viewpoint shows one node down, but queries and ETL jobs still run, so the node should not actually be down. Is this a Viewpoint issue?
 
2. Several days later, the size of the daily-load tables (per dbc.TableSize) grew day by day, e.g. day 1 is 6 GB, day 2 is 7 GB, and so on, but we are sure the real data volume was almost the same on these days. This happened only in some tables and on some AMPs. Then, after a DB auto-restart, the DBA performed a full table scan and the sizes became normal again. Why did this happen?
 
3. We checked the log files in /var/log/sa and found the number of packets per second reduced by 80%. Could the switch change the packet rate?

Forums: 

Alternative for Multiple Left outer join - forum topic by drmkd17

SELECT COALESCE(t1.col3, p1.col3) AS target_Column1,
       COALESCE(t2.col3, p2.col3) AS target_Column2,
       COALESCE(t3.col3, p3.col3) AS target_Column3,
       ...
       COALESCE(t25.col3, p25.col3) AS target_Column25
FROM d1.g1_view a
JOIN d1.g1_metadate b
  ON b.cd = 'some_cd'
LEFT OUTER JOIN d1.val_table t1
  ON t1.val_cd = a.some_ind
 AND t1.schm_Cd = 'abc'
LEFT OUTER JOIN d1.schm_tab p1
  ON p1.schm_cd = 'abc'
LEFT OUTER JOIN d1.val_table t2
  ON t2.val_cd = a.some_other_ind
 AND t2.schm_Cd = 'pqr'
LEFT OUTER JOIN d1.schm_tab p2
  ON p2.schm_cd = 'pqr'
...
LEFT OUTER JOIN d1.val_table t25
  ON t25.val_cd = a.some_other_other_ind
 AND t25.schm_Cd = 'xyz'
LEFT OUTER JOIN d1.schm_tab p25
  ON p25.schm_cd = 'xyz'

Hi All,
 
I have a view that runs for a very long time. Can anybody suggest an alternative to the left outer joins to the val_table and the schm_tab? Thanks in advance.
 
Thanks,
DrmKd
 
 

Forums: 

How to convert UTC millisecond to Timestamp(6) - response (4) by PS186010


Hi,
Is there a possibility to create a mirror image of this function? I'm trying to have CURRENT_TIMESTAMP represented as an INTEGER/BIGINT. The point of the game is to have a roughly unique process id (uniqueness within 1 second would be OK; anything more fine-grained would be perfect).
I was experimenting with HASHBUCKET(HASHROW(CURRENT_TIMESTAMP)), but it turned out that with bad luck you can run into trouble.
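One possible sketch under the 1-second uniqueness requirement: compute whole seconds since the Unix epoch from CURRENT_TIMESTAMP with plain date/time arithmetic (untested here, and note it reflects the session time zone rather than UTC):

```sql
-- Days since epoch * 86400, plus the time-of-day components, as an integer
SELECT (CAST(CURRENT_TIMESTAMP AS DATE) - DATE '1970-01-01') * 86400
     + EXTRACT(HOUR   FROM CURRENT_TIMESTAMP) * 3600
     + EXTRACT(MINUTE FROM CURRENT_TIMESTAMP) * 60
     + CAST(EXTRACT(SECOND FROM CURRENT_TIMESTAMP) AS INTEGER)
       AS epoch_seconds;
```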
Yours,
Piotr

BTEQ hangs for large BTEQ file - response (1) by VinnyVally


Hello,

Sad to see that I haven't had a single response to this :( I'm still having issues with this and have recently got a BTEQ file of size 81,000+, which is not possible to cut down below 61,440.

I've been looking online once again and came across '.SHOW CONTROLS', and I was wondering whether the 'maximum byte count' or 'multiple maximum byte count' may be causing the issue. However, from '.SHOW CONTROLS' the values appear to be set high enough for this not to be an issue; regardless, I'd like to double them just in case. Sadly I cannot find any way of setting this parameter higher.

Any help is very much appreciated as I'd really like to get this issue sorted.

Cheers.
