I would say this is not a limitation of Teradata itself, since Teradata can handle trillions of rows.
Rather, it is a limitation of SQL Assistant, ODBC, and network resources.
When you import a file using SQL Assistant, the DML statement is executed once for each record. So for a 10M-row file, that INSERT is executed 10 million times, and the client and network resources are quickly exhausted.
SQL Assistant's import is fine for small files. For large files you should use FastLoad or MultiLoad, or the TPT LOAD or UPDATE operators.
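For example, a large delimited file can be bulk-loaded with a minimal FastLoad script along these lines (the database, table, column definitions, and file path here are hypothetical placeholders; adjust them for your own data):

```sql
.LOGON tdpid/myuser,mypassword;
DATABASE mydb;

/* Error tables capture rejected rows; they must not pre-exist */
BEGIN LOADING mydb.big_table
    ERRORFILES mydb.big_table_err1, mydb.big_table_err2;

/* Comma-delimited input file */
SET RECORD VARTEXT ",";

DEFINE col1 (VARCHAR(20)),
       col2 (VARCHAR(50))
FILE = /path/to/data.csv;

/* FastLoad sends data in large blocks, not one INSERT per row */
INSERT INTO mydb.big_table (col1, col2)
VALUES (:col1, :col2);

END LOADING;
.LOGOFF;
```

Note that FastLoad requires the target table to be empty; for appending to a populated table, MultiLoad or the TPT UPDATE operator is the usual choice.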