Stats collection inside SP's - response (13) by dnoeth
Stats collection inside SP's - response (14) by barani_sachin
Thanks again for your timely replies :) Could you please point me to a link where I can find a good definition of, and the differences between,
SECURITY OWNER, CREATOR, DEFINER, INVOKER?
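For context, these appear as options of the SQL SECURITY clause in the procedure DDL; a minimal sketch (the procedure name is hypothetical):
REPLACE PROCEDURE my_proc (OUT msg VARCHAR(20))
SQL SECURITY INVOKER    -- one of OWNER, CREATOR, DEFINER, INVOKER
BEGIN
   SET msg = 'ok';      -- trivial body; only the security clause matters here
END;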
Stats collection inside SP's - response (15) by dnoeth
The Stored Procedures and the DDL manuals?
Dieter
Complex Transactions in Stored Procedures - forum topic by rtefft
We perform ETL on a table of transactions. After loading the transaction table, we want to use stored procedures (SPs) to perform the very complex business validations and then apply the transactions against a dozen permanent state tables. The nature of the data requires sequential processing of the transactions, since some may be dependent on others or affect the results of others. Fortunately, the data volume is very small.
As part of this effort, a small web app is being built to enable some manual cleanup of transactions which fail validation. We want it to call the same stored procedures to validate and apply the changes that the ETL does. This will ensure consistency and prevent duplicate coding. Validation and processing of a single transaction will span multiple procedures, and we need complete control over the rollback/commit activity. ANSI mode appears to offer this, but does anyone have experience or suggestions on doing this? Several times I have seen "do not use ANSI mode" when calling procedures (especially from .Net), so I am hoping for some guidance. We can't afford to waste weeks of effort only to find out this approach isn't feasible. A minimal sketch of the pattern we have in mind follows.
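Something like this, assuming ANSI session mode (the procedure names validate_txn and post_txn are hypothetical):
REPLACE PROCEDURE apply_transaction (IN txn_id INTEGER, OUT status VARCHAR(20))
BEGIN
   DECLARE EXIT HANDLER FOR SQLEXCEPTION
   BEGIN
      ROLLBACK WORK;               -- undo the work of all called procedures
      SET status = 'ROLLED BACK';
   END;
   CALL validate_txn(txn_id);      -- hypothetical validation procedure
   CALL post_txn(txn_id);          -- hypothetical apply procedure
   COMMIT WORK;                    -- ANSI mode: nothing is permanent until here
   SET status = 'COMMITTED';
END;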
Any thoughts would be appreciated.
-Rich
RODBC system error 193 - forum topic by bkj123
Good afternoon.
I've been connecting Teradata and RStudio with RODBC on a Windows XP (32-bit) laptop for a while. Example code is at the bottom of this post.
I am not able to run this code on a Windows 7 64-bit laptop. I receive the message "[RODBC] ERROR: state IM003, code 160, message Specified driver could not be loaded due to system error 193: (Teradata, C:\Program Files (x86)\Teradata\Client\13.10\ODBC Driver for Teradata\Lib\tdata32.dll)."
As far as I can see there are a couple of differences between the two setups:
1. Windows XP is 32-bit with 32-bit drivers; Windows 7 is 64-bit with 32-bit drivers.
2. The DSN (e.g. "_Prod") on the XP laptop is a system DSN. The Win7 laptop uses a user DSN, since I don't have administrator rights on this laptop.
Any suggestions on how to resolve this? Can I go an alternative route like JDBC? Thank you - Brian
library(RODBC)
# Connect via the "_Prod" DSN (credentials redacted)
myconn <- odbcConnect("_Prod", uid = "xxxxx", pwd = "zzzzz")
sqlStr <- "SELECT * FROM databasex.tablez;"
sqlQuery(myconn, sqlStr, believeNRows = FALSE)
Sybase To Teradata..... - response (2) by gopaltera
Great, we are using TPT (Teradata Parallel Transporter) to export the data to files.
RODBC system error 193 - response (1) by ulrich
try
http://forums.teradata.com/forum/analytics/connecting-to-teradata-in-r-via-the-teradatar-package
You need to download the JDBC driver and set the correct path.
Ulrich
Multi-Value compression has increased the table size - response (21) by amit.saxena0782
Hi Dieter,
I proposed MVC to my client for Teradata 12 tables. As per the analysis, I got around 800 GB of savings (approx.) on 2.3 TB of tables, with both table-level and column-level savings. After much investigation of MVC, the client has come up with the concern below:
Concern: Some of these columns are derived from bases that can change, e.g. pricing strategies, cost price changes, and tax (VAT).
If any of these bases change the profile of the data in the tables will change, which means that a totally new set of ‘ideal’ compression values would apply.
How often would the compression values be reviewed?
As per my understanding, if the column values are volatile, as derived columns can be, then we do not suggest applying compression;
but if the column values are mostly duplicates and static, then we apply COMPRESS to save space. On the whole, though, I am still confused: even though the columns are derived, I somehow still got savings of around 30%-40% for that table.
Can you please advise whether there is a way we can apply compression on tables with some or all derived columns, as I can see much saving. For illustration, a sketch of such column-level COMPRESS lists follows.
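Something like this (the table, columns, and value lists are all hypothetical):
CREATE TABLE sales_fact (
   item_id    INTEGER NOT NULL,
   vat_rate   DECIMAL(4,2) COMPRESS (0.00, 5.00, 17.50, 20.00),  -- current common rates
   cost_price DECIMAL(10,2) COMPRESS (0.00, 9.99, 19.99)         -- most frequent values
)
PRIMARY INDEX (item_id);
If a base such as a VAT rate changes, the list stays valid (new values are simply stored uncompressed), but the savings erode, which is why the value lists would need periodic review.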
Regards,
Amit
Temporal usage classic scenario - forum topic by Qamar.Shahbaz
Hi
I need to track the history of party addresses and have created a temporal table for this.
CREATE MULTISET TABLE Employee_Addr_Hist (
   Name VARCHAR(100),
   City VARCHAR(100),
   VT PERIOD(DATE) NOT NULL AS VALIDTIME  -- AS VALIDTIME makes VT the valid-time dimension
)
PRIMARY INDEX (Name);
INSERT INTO Employee_Addr_Hist (Name, City, VT)
VALUES ('John', 'London', PERIOD(DATE '2011-01-01', UNTIL_CHANGED));
Today (2013-03-19) I received a new row from the source saying 'John' moved to 'Paris' on 2012-01-01.
If I use the UPDATE below, CURRENT_DATE is used to close the previous record and open the new one, which is wrong, as John moved to Paris on 2012-01-01.
UPDATE Employee_Addr_Hist
SET City = 'Paris'
WHERE Name = 'John'
AND END(VT) IS UNTIL_CHANGED;
Can anyone help me with this? I just need to use the source date for closing the old record and opening the new one using the temporal features.
Temporal usage classic scenario - response (1) by KS42982
You can add to your SET statement like below -
SET VT = PERIOD(BEGIN(VT), source_date_column)
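A fuller sketch of the whole close-old/open-new pattern, assuming a nonsequenced valid-time update may set VT directly (the literal date 2012-01-01 stands in for your source date column):
NONSEQUENCED VALIDTIME
UPDATE Employee_Addr_Hist
SET VT = PERIOD(BEGIN(VT), DATE '2012-01-01')   -- close the old row at the move date
WHERE Name = 'John'
AND END(VT) IS UNTIL_CHANGED;

INSERT INTO Employee_Addr_Hist (Name, City, VT) -- open the new row from the move date
VALUES ('John', 'Paris', PERIOD(DATE '2012-01-01', UNTIL_CHANGED));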
Performance considerations when taking backups - forum topic by Nishant.Bhardwaj
Hi Experts,
I need your suggestions on the two possible scenarios through which we can take a backup of a huge table in production.
Scenario 1:
CREATE TABLE A_bkp AS A WITH DATA AND STATS;
Scenario 2:
First, create an empty table: CREATE TABLE A_bkp AS A WITH NO DATA;
Second, use a MERGE statement to copy the data from the main table to the _bkp table,
i.e. a MERGE statement in place of a normal INSERT...SELECT,
like:
MERGE INTO A_bkp
USING A
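For completeness, I believe the fully spelled-out MERGE would look something like this (the column names c1/c2 are hypothetical; as far as I know the ON clause must equate the target's primary index):
MERGE INTO A_bkp AS tgt
USING A AS src
ON tgt.c1 = src.c1                               -- c1 assumed to be the primary index
WHEN MATCHED THEN UPDATE SET c2 = src.c2
WHEN NOT MATCHED THEN INSERT (c1, c2) VALUES (src.c1, src.c2);
Since the target starts empty, every source row takes the NOT MATCHED branch.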
I had a discussion with one of my peers, and he suggested going with scenario 2 rather than scenario 1, as scenario 1 will run into spool issues in production because the table is really huge, and MERGE does not take any spool to process the records whereas INSERT...SELECT does.
As I am a bit unsure, I just wanted to check this with the experts.
Thanks in advance.
Nishant
How to retrieve the Relative Record Number? - response (9) by wicik
Hi there...
I have a problem very similar to our friend's at the very start of the thread.
I'm pretty noobish at this type of SQL, so please try not to be rough on me :)
Well... I have to convert a numeric field to a date, then count some upload data and group the results.
Numeric_column looks like 20121121133549, which is probably YYYYMMDDHHMMSS.
The other columns are acces_point_type and data_uplink.
My goal is to convert the numeric to a date (without the HHMMSS time), count data_uplink, and group by acces_point_type.
A simple GROUP BY would not be a problem, but the whole idea of converting the data, combining it with a cast date format, and grouping by that is pretty dark magic to me.
Any help would be appreciated.
PS: Sorry for my terrible English :/
How to retrieve the Relative Record Number? - response (10) by wicik
Well...
It sounds stupid, but I helped myself with a plain SELECT --> copy-paste to Excel --> sorting and numeric-to-date conversion via the LEFT and RIGHT functions (to split out the needed digits and reconnect them as a date), and by making a PivotChart from all of it.
It gave me pretty much the same result, but not as professional as it should be.
Still, any help would be welcome :)
I need to learn how to do it properly.
Regards
Performance considerations when taking backups - response (1) by dnoeth
Hi Nishant,
If I had to rank the different scenarios, it would be:
#1: first scenario
#2: create table as existing_table with no data plus ins/sel
#3: second scenario
- "create table as existing_table" is simply copying datablocks without any spool
- INS/SEL is also not using any spool if source and target have exactly the sams DDL
- MERGE might be more efficient than INS/SEL, but definitely not in this case
Dieter
How to retrieve the Relative Record Number? - response (11) by KS42982
If you do not care about the HHMISS part (the last 6 digits of your numeric_column), then you can do the same thing you did in Excel by using the SUBSTR function: fetch only the first 8 digits, give the result a name, and use that in the GROUP BY.
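A sketch of that query, assuming numeric_column is a 14-digit number and my_table is a stand-in for your table (TRIM implicitly converts the number to a character string):
SELECT CAST(SUBSTR(TRIM(numeric_column), 1, 8) AS DATE FORMAT 'YYYYMMDD') AS event_date,
       acces_point_type,
       SUM(data_uplink) AS total_uplink   -- or COUNT(*), depending on what you count
FROM my_table
GROUP BY 1, 2;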
Performance considerations when taking backups - response (2) by KS42982
One thing I'd like to mention about scenario #1 (CREATE TABLE AS existing table WITH DATA): if your existing table has a particular PI, SIs, or partitioning, Teradata would not create the same in the new table (it will give the new table a default PI), and that may slow down inserting the records, especially when you have a lot of records to insert. So just make sure of that.
Performance considerations when taking backups - response (3) by dnoeth
"CREATE TABLE AS existing_table" creates an *exact* copy of the existing_table including PI/SI/Partitioning, etc.
There are only two things which are not copied: Foreign Keys and Triggers.
But of course you're correct if it's "CREATE TABLE AS (SELECT...)"
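To illustrate the difference (the PI column c1 is hypothetical):
-- Exact copy: PI, SIs, and partitioning come along (FKs and triggers do not)
CREATE TABLE A_bkp AS A WITH DATA AND STATS;

-- Subquery form: the target gets a default PI unless one is declared explicitly
CREATE TABLE A_bkp AS (SELECT * FROM A) WITH DATA
PRIMARY INDEX (c1);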
Dieter
Performance considerations when taking backups - response (4) by KS42982
You are right, I meant CREATE TABLE AS (SELECT..) only.
I never tried "CREATE TABLE AS existing_table", but it's good to know that it copies all the indexes; I will use that from next time.
spool usage and spool space - forum topic by nyemul
Hi
If a few SQL queries are run and the highest spool usage value among all the queries is obtained from DBQLOGTBL,
does this mean that the minimum spool should be set to the highest spool usage value among all the queries?
For example: query 1 shows spool usage of 10 GB, query 2 shows spool usage of 40 GB, and query 3 shows spool usage of 25 GB.
Does this mean that the spool space allocated should be at least 40 GB?
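For reference, this is the kind of lookup and setting I mean, assuming DBQL logging is enabled (the user name etl_user is hypothetical; SpoolUsage in DBC.DBQLogTbl is in bytes):
SELECT UserName, QueryID, SpoolUsage
FROM DBC.DBQLogTbl
WHERE UserName = 'etl_user'
ORDER BY SpoolUsage DESC;

-- If 40 GB is the observed peak, the user's spool limit must be at least that,
-- plus headroom for concurrent queries sharing the same allocation:
MODIFY USER etl_user AS SPOOL = 50000000000 BYTES;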
Niteen
SQL Syntax - forum topic by AsherShah
I have a view:
Create view VW1 as SEL col1, col2, col3 from tab1;
and I can run the following SEL:
SEL a.col1, a.col2, a.col3, b.col4 from
( sel col1, col2, col3 from VW1 where col2 = 'ABC') a
left outer join
(SEL col1, Col3 as col4 from VW1 where col2 = 'ABC'
qualify row_number() over ( order by col1) = 1
) b
on a.col1 = b.col1;
I need to change the above SEL into a view so that users can access it with a simple query like SEL * FROM VW2 WHERE col2 = 'ABC'.
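My rough attempt is below; I guessed that the QUALIFY needs PARTITION BY col2 (so that a user's WHERE col2 = 'ABC' on the view still picks one row per col2 value) and that the join has to carry col2 as well:
REPLACE VIEW VW2 AS
SEL a.col1, a.col2, a.col3, b.col4
FROM VW1 a
LEFT OUTER JOIN (
    SEL col1, col2, col3 AS col4
    FROM VW1
    QUALIFY ROW_NUMBER() OVER (PARTITION BY col2 ORDER BY col1) = 1
) b
ON a.col1 = b.col1
AND a.col2 = b.col2;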
Any suggestions?
Thanks
Asher
I prefer a naming convention for volatile tables (like VT_tab) so there's never any chance that a permanent table with the same name exists :-)
Dieter