Channel: Teradata Forums - Database

cast and concatenate - response (5) by Salokh


Hi Dieter,
 
While running the query below, I am getting the error "SELECT Failed 3775, Invalid Hexadecimal Constant":
 
sel * from dbc.databasespace where tableid= '00-00-1C-00-00-00'xb
 
Can you please help me out with how to retrieve byte data?
Thanks,
Salokh


cast and concatenate - response (6) by dnoeth


Hi Salokh,
remove the dashes from the hex string:

sel * from dbc.databasespace where tableid= '00001C000000'xb

 

nice tool - forum topic by dixitjagi

drop stat not working in TD 14.10 - response (1) by desai51


Hi,
I am facing the same problem. "Drop Statistics on tablename;" gives me an invalid query error. Note that I am trying to drop statistics on a volatile table. TD version 14.10.
Thanks

Transpose columns to row in a single query - response (3) by Raja_KT


You can try select tdstats.udfconcat(trim(a)) from tbl and UNION ALL it with select tdstats.udfconcat(trim(b)) from tbl. The result will come back in one column as a quoted, comma-separated string. You can then use regexp_replace to strip the quotes and split the values on the comma delimiter, as sketched below.
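A minimal sketch of that approach, assuming a table tbl with columns a and b (the names used in the post) and the tdstats.udfconcat aggregate UDF being installed:

-- one comma-separated string per source column, quotes (single or double) stripped
SELECT REGEXP_REPLACE(tdstats.udfconcat(TRIM(a)), '["'']', '') AS vals FROM tbl
UNION ALL
SELECT REGEXP_REPLACE(tdstats.udfconcat(TRIM(b)), '["'']', '') FROM tbl;
-- the comma-delimited result can then be split back into rows,
-- e.g. with STRTOK_SPLIT_TO_TABLE, if individual values are needed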

 

 

How to limit rows to only those where there are duplicates of a specific column? - response (10) by lijianguonew


select t1.acct, t1.id, t1.code,
       t2.acct, t2.id, t2.code,
       t2.row_idnew, t1.row_id,
       t2.row_idnew - t1.row_id as aa,
       min(t1.code) over (partition by t2.row_idnew - t1.row_id)
from
 (select acct, id, code, row_number() over (order by code)
  from avt
  where id <> 24386
 ) t1 (acct, id, code, row_id)
inner join
 (select acct, id, code, row_number() over (order by code)
  from avt
  qualify id <> 24386
 ) t2 (acct, id, code, row_idnew)
on  t1.acct = t2.acct
and t1.code = t2.code
order by t1.code

Determine new table skew for a different PI - forum topic by StevenSchmid


Hi
I was wondering if there is a way to calculate the skew of an existing table for a different choice of PI. I am aware of the hashing functions, and the following query shows the distribution of rows across the AMPs based on the new PI; however, on a system with hundreds of AMPs it would be nice to determine the skew factor value directly:
SELECT 
  hashamp(hashbucket(hashrow( new_PI_column_list ))) as ampnum
  ,count(*)
from <database>.<tablename>
group by 1
order by 2
Cheers
Steven


Determine new table skew for a different PI - response (1) by Raja_KT


I am not aware of any dictionary table for this. If it were me, I would write a script that loops through a table's columns (composite combinations too, if need be) and redirects the output to a file rather than a table. That way an automation script can run over all required databases and tables and provide output for every candidate column or composite, for example as sketched below.
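A minimal sketch of such a generator, assuming placeholder names mydb and mytab; it builds one row-distribution query per column from dbc.ColumnsV, and the generated statements can be spooled to a file and run:

-- generate one hash-distribution query per column of mydb.mytab
SELECT 'SELECT ''' || TRIM(ColumnName) || ''' AS candidate_pi, ' ||
       'HASHAMP(HASHBUCKET(HASHROW(' || TRIM(ColumnName) || '))) AS ampnum, ' ||
       'COUNT(*) AS cnt FROM mydb.mytab GROUP BY 2 ORDER BY 3 DESC;'
FROM dbc.ColumnsV
WHERE DatabaseName = 'mydb'
  AND TableName    = 'mytab';

Composite candidates could be covered by concatenating several column names inside the HASHROW call.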


REGEXP_REPLACE HELP String first letter uppercase rest lowercase! - forum topic by PeterSchwennesen


I need to convert all words in a string to first letter uppercase, rest lowercase, while skipping some specific words, and remove multiple spaces.
I would like to convert “ teST   is a TEst” to “Test is a Test”.
This removes multiple spaces when found:

REGEXP_REPLACE('  this    IS     a     Test  ', '( )+','',1,0,'c')

That was working, but the rest I have no idea how to do, and looking around on the net did not give much help either.
I need to do something like this: 

SELECT REGEXP_REPLACE(c.Str,'A,IS','a,is',1,0,'i') AS str -- This replaces the single words: A to a, Is to is
  FROM (
SELECT REGEXP_REPLACE(b.Str,'[^a-z]','[^A-Z]',1,0,'c') AS str -- This should set the first letter to upper and the rest to lower case: This Is A Test
  FROM (
SELECT TRIM(REGEXP_REPLACE('  this    IS     a     Test  ', '( )+','',1,0,'c')) AS str -- removes multiple Spaces and trim
       ) AS b
       ) AS c

Or, if possible, all this combined in one REGEXP_REPLACE would also be nice.
Peter Schwennesen


REGEXP_REPLACE HELP String first letter uppercase rest lowercase! - response (1) by Raja_KT


I see you would like to convert “ teST   is a TEst” to “Test is a Test”, where the last word again starts with upper case.
I put an initcap around the result you got above, then replace Is with is and A with a:
select
regexp_replace(regexp_replace(initcap(REGEXP_REPLACE('  teST    IS     a     TEst  ', '( )+',' ',1,0,'c')),'Is','is',1,1,'i'),'A','a',1,1,'i')
 

REGEXP_REPLACE HELP String first letter uppercase rest lowercase! - response (2) by PeterSchwennesen


Hi Raja
The initcap function is super.
Are there other functions like this that are new in 14?
Peter Schwennesen

Varchar to Timestamp(6) conversion - response (1) by dnoeth


Hi Orlando,
what's your TD release?
In TD14 you might utilize Oracle's TO_TIMESTAMP, which is a bit more flexible regarding single-digit day/hour/minute/second, but month still needs to be two digits. But quoted characters like "\" seem to be allowed only in TO_CHAR.
So add the missing zero, remove the backslash and pass it to TO_TIMESTAMP:
 

TO_TIMESTAMP(OREPLACE(CASE WHEN x LIKE '_/%' THEN '0' ELSE '' END || x, '\',''), 'mm/dd/yyyy hh:mi:ss AM')

 

Determine status on a given date - response (5) by dnoeth


Instead of a join you might use EXPAND ON to create the missing dates; see
http://forums.teradata.com/forum/general/creating-missing-observations
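A minimal sketch of the EXPAND ON approach, assuming a hypothetical table status_tab where each status row is valid from start_dt to end_dt; it produces one row per calendar day covered by each period:

SELECT acct, status, BEGIN(dt_prd) AS calendar_dt
FROM status_tab
EXPAND ON PERIOD(start_dt, end_dt) AS dt_prd BY INTERVAL '1' DAY;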

REGEXP_REPLACE HELP String first letter uppercase rest lowercase! - response (3) by Raja_KT


Many Oracle functions are available in Teradata 14; you can check them out. In TD 15 you will be amazed by the variety: array/varray functions like Oracle's, aggregate functions, arithmetic, trigonometric and hyperbolic functions, built-in functions, calendar functions, LOB functions, and more.

Determine new table skew for a different PI - response (2) by dnoeth


Hi Steven,
how do you define the skew factor?
I use this for calculating the percent deviation from average:

SELECT
   HASHAMP(HASHBUCKET(HASHROW(col))) AS vproc,
   COUNT(*) AS cnt,
   100 * (cnt - AVG(cnt) OVER ()) / AVG(cnt) OVER () (DEC(8,2)) AS deviation
FROM tab
GROUP BY 1

And based on the count per AMP you can do the skew calculation:

SELECT
   SUM(cnt) AS RowCount 
   ,MAX(SkewedAMP) AS SkewedAMP
   -- skew factor, 1 = even distribution, 1.1 = max AMP needs 10% more space than the average AMP
   ,MAX(cnt) / NULLIF(AVG(cnt),0) (DEC(5,2)) AS SkewFactor
   -- skew factor, between 0 and 99.  Same calculation as WinDDI/ TD Administrator
   ,(100 - (AVG(cnt) / NULLIF(MAX(cnt),0) * 100)) (DEC(3,0)) AS SkewFactor_WINDDI
FROM
 (
   SELECT
      HASHAMP(HASHBUCKET(HASHROW(col))) AS vproc,
      COUNT(*) AS cnt,
      100 * (cnt - AVG(cnt) OVER ()) / AVG(cnt) OVER () (DEC(8,2)) AS deviation,
      CASE WHEN cnt = MAX(cnt) OVER () THEN vproc END AS SkewedAMP 
   FROM tab
   GROUP BY 1
 ) AS dt

And for big tables you might be better off using a SAMPLE of a few percent instead of aggregating all rows, for example:
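A minimal sketch of the sampled variant, assuming the same table tab and column col as above; the SAMPLE sits in an inner derived table so only the sampled rows are aggregated:

SELECT
   HASHAMP(HASHBUCKET(HASHROW(col))) AS vproc,
   COUNT(*) AS cnt
FROM
 ( SELECT col FROM tab SAMPLE 0.05 ) AS s   -- roughly 5 percent of the rows
GROUP BY 1
ORDER BY 2 DESC;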


Partition effectiveness - response (3) by dnoeth


Hi Robin,
you defined 416,741 partitions, and even if there are PARTITION stats the optimizer must still include all possible partitions in the plan.
You should limit the number of partitions, e.g.

PARTITION BY RANGE_N(EFF_END_DT BETWEEN DATE '2012-12-31' AND DATE '2030-12-31' EACH INTERVAL '7' DAY, DATE '9999-12-31' AND DATE '9999-12-31' EACH INTERVAL '1' DAY, NO RANGE, UNKNOWN)

 

The NO RANGE partition will also be included in the scan, but this doesn't matter if the number of rows in it is small.

 

You might also include the max date in NO RANGE:

 

PARTITION BY RANGE_N(EFF_END_DT  BETWEEN DATE '2012-12-31' AND DATE '2030-12-31' EACH INTERVAL '7' DAY ,NO RANGE, UNKNOWN)

 

As a side effect you'll save a lot of disk space (6 bytes per row * number of indexes), as your current scheme results in 8-byte partition numbers, while reducing to less than 64K partitions only needs 2 bytes.
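For a rough count (my arithmetic, not from the original post): weekly partitions up to DATE '9999-12-31' give about (9999 - 2012) * 365.25 / 7 ≈ 417,000 partitions, in line with the 416,741 above, while limiting the range to 2030 gives about 18 * 365.25 / 7 ≈ 940 weekly partitions, comfortably below the 65,535 that still fit into a 2-byte partition number.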

swap values in two columns using update SQL - response (3) by dnoeth


Hi Moutusi,
it's that easy :-)
Unless you try it on MySQL, which will screw up the data to
A B
2 2
4 4
6 6
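The UPDATE in question is not quoted in this excerpt; a minimal sketch of the usual single-statement swap, assuming a table tab with columns A and B:

-- standard SQL evaluates the right-hand sides against the row's old values,
-- so this swaps A and B in Teradata; MySQL reuses the already-updated A,
-- which produces the duplicated values shown above
UPDATE tab
SET A = B,
    B = A;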

Determine new table skew for a different PI - response (3) by VandeBergB


and this works too...:)
select
   sum(tallyset.rowtally)
  ,min(tallyset.rowtally)
  ,max(tallyset.rowtally)
  ,avg(tallyset.rowtally)
  ,100 - (avg(tallyset.rowtally) / max(tallyset.rowtally) * 100) as skewfactor
from
 (select
     hashamp(hashbucket(hashrow(<candidate columns>))) as hashedamp
    ,count(*) as rowtally
  from <dbname.tablename>
  group by 1) as tallyset

Error: LOBs are not allowed to be hashed - forum topic by tmcrouse


I did Google this and it talks about data lengths. We have 4 columns in one table that are VARCHAR(1500), because the end users say they use Excel for logging updates on items and they need as much space as they can get. When I put the table into Access I limited it to 1500, and I am thinking that is too big for Teradata. So I actually removed those columns altogether for the time being and refreshed my Tableau connection, but I am still getting this error. The only other table that has a 1500 varchar is my program table, because the program descriptions for interventions are lengthy; the maximum length I found is 1333 if I remember correctly. The rest of the tables are normal. Does this mean I now have to tell the program managers there is a maximum number of characters allowed, say 500, and we will have to figure out a way to shorten the program descriptions to not exceed 500 or whatever the maximum can be?
Are there any articles out there that explain how to figure out the maximum lengths? I am not sure if this means the maximum column length, the maximum overall table length when all columns are added together, or the maximum when you add all tables together. I thought I read somewhere something about 65000, but I am not sure what that was referring to.


Determine new table skew for a different PI - response (4) by krishaneesh


Here is a different approach in case this helps you. 
For a given PI of a table, the skew factor depends on how the row counts of the PI values are distributed across the AMPs, so you can judge it from the counts per distinct PI value: SEL <PI columns>, COUNT(*) FROM <databasename>.<tablename> GROUP BY <PI columns>. If the counts are not distributed evenly, the table is skewed; i.e., if the count for one PI value is in the order of thousands while another PI value has only a few rows, the table is heavily skewed.
Take the new PI columns you want to check and examine their distribution the same way, as sketched below. If the counts are distributed evenly across the values, you can consider changing to the new PI.
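A minimal sketch of that check, assuming hypothetical candidate PI columns cand_col1 and cand_col2:

-- rows per candidate-PI value; a very uneven spread (for example one
-- dominant value or many NULLs) suggests the new PI would skew
SELECT cand_col1, cand_col2, COUNT(*) AS row_cnt
FROM <databasename>.<tablename>
GROUP BY 1, 2
ORDER BY row_cnt DESC;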
