
DataStage Interview Questions and Answers

Q1. Define Data Stage?
Ans: A data stage is basically a tool that is used to design, develop and execute various applications to fill multiple tables in a data warehouse or data marts. It is a program for Windows servers that extracts data from databases and changes it into data warehouses. It has become an essential part of the IBM WebSphere Data Integration suite.

Q2. Explain how a source file is populated?
Ans: We can populate a source file in many ways, such as by creating a SQL query in Oracle, or by using a row generator extract tool, etc.

Q3. Name the command line functions to import and export the DS jobs?
Ans: To import the DS jobs, dsimport.exe is used, and to export the DS jobs, dsexport.exe is used (see the command sketch after Q14).

Q4. What is the difference between Datastage 7.5 and 7.0?
Ans: In Datastage 7.5 many new stages were added for more robustness and smooth performance, such as the Procedure Stage, Command Stage, Generate Report, etc.

Q5. In Datastage, how can you fix the truncated data error?
Ans: The truncated data error can be fixed by using the environment variable IMPORT_REJECT_STRING_FIELD_OVERRUN.

Q6. Define Merge?
Ans: Merge means to join two or more tables. The two tables are joined on the basis of the primary key columns in both tables.

Q7. Differentiate between data file and descriptor file?
Ans: As the name implies, data files contain the data, and the descriptor file contains the description/information about the data in the data files.

Q8. Differentiate between Datastage and Informatica?
Ans: In Datastage, there is a concept of partitioning and parallelism for node configuration, while there is no concept of partitioning and parallelism in Informatica for node configuration. Also, Informatica is more scalable than Datastage, while Datastage is more user-friendly as compared to Informatica.

Q9. Define Routines and their types?
Ans: Routines are basically collections of functions that are defined by the DS Manager. They can be called via the transformer stage. There are three types of routines: parallel routines, mainframe routines and server routines.

Q10. How can you write parallel routines in Datastage PX?
Ans: We can write parallel routines with a C or C++ compiler. Such routines are also created in the DS Manager and can be called from the transformer stage.

Q11. What is the method of removing duplicates without the Remove Duplicates stage?
Ans: Duplicates can be removed by using the Sort stage. We can use the option Allow Duplicates = false.

Q12. What steps should be taken to improve Datastage jobs?
Ans: In order to improve the performance of Datastage jobs, we first have to establish the baselines. Secondly, we should not use only one flow for performance testing. Thirdly, we should work in increments. Then we should evaluate data skews. Then we should isolate and solve the problems one by one. After that, we should distribute the file systems to remove bottlenecks, if any. Also, we should not include the RDBMS at the start of the testing phase. Last but not least, we should understand and assess the available tuning knobs.

Q13. Differentiate between Join, Merge and Lookup stage?
Ans: All three differ in the way they use memory storage, compare input requirements and treat various records. Join and Merge need less memory as compared to the Lookup stage.

Q14. Explain Quality stage?
Ans: The Quality stage is also known as the Integrity stage. It assists in integrating different types of data from various sources.
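Returning to Q3: a minimal sketch of typical dsimport.exe/dsexport.exe invocations. The host, user, project and file names here are hypothetical, and the exact switches vary by DataStage release, so verify them on your install (e.g. by running the executables with no arguments):

    dsimport.exe /H=myhost /U=dsadm /P=secret MyProject C:\exports\jobs.dsx
    dsexport.exe /H=myhost /U=dsadm /P=secret /JOB=LoadCustomers MyProject C:\exports\LoadCustomers.dsx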
Q15. Define Job control?
Ans: Job control can best be performed by using Job Control Language (JCL). This tool is used to execute multiple jobs simultaneously, without using any kind of loop.

Q16. Differentiate between Symmetric Multiprocessing and Massive Parallel Processing?
Ans: In Symmetric Multiprocessing, the hardware resources are shared by the processors. Each processor has one operating system and communicates through shared memory. In Massive Parallel Processing, a processor accesses the hardware resources exclusively. This type of processing is also known as Shared Nothing, since nothing is shared. It is faster than Symmetric Multiprocessing.

Q17. What are the steps required to kill the job in Datastage?
Ans: To kill the job in Datastage, we have to kill the respective process ID.

Q18. Differentiate between validated and compiled in Datastage?
Ans: In Datastage, validating a job means executing a job. While validating, the Datastage engine verifies whether all the required properties are provided or not. On the other hand, while compiling a job, the Datastage engine verifies whether all the given properties are valid or not.

Q19. How to manage date conversion in Datastage?
Ans: We can use the date conversion function for this purpose, i.e. Oconv(Iconv(Fieldname,"Existing Date Format"),"Another Date Format").

Q20. Why do we use exception activity in Datastage?
Ans: All the stages after the exception activity in Datastage are executed in case any unknown error occurs while executing the job sequencer.

Q21. Define APT_CONFIG in Datastage?
Ans: It is the environment variable that is used to identify the *.apt file in Datastage. It is also used to store the node information, disk storage information and scratch information.

Q22. Name the different types of Lookups in Datastage?
Ans: There are two types of Lookups in Datastage, i.e. Normal lkp and Sparse lkp. In a Normal lkp, the data is saved in memory first and then the lookup is performed. In a Sparse lkp, the data is directly saved in the database. Therefore, the Sparse lkp is faster than the Normal lkp.

Q23. How can a server job be converted to a parallel job?
Ans: We can convert a server job into a parallel job by using the IPC stage and Link Collector.

Q24. Define Repository tables in Datastage?
Ans: In Datastage, the Repository is another name for a data warehouse. It can be centralized as well as distributed.

Q25. Define OConv() and IConv() functions in Datastage?
Ans: In Datastage, the OConv() and IConv() functions are used to convert formats from one format to another, i.e. conversions of roman numbers, time, date, radix, numeral ASCII, etc. IConv() is basically used to convert formats for the system to understand, while OConv() is used to convert formats for users to understand.

Q26. Explain Usage Analysis in Datastage?
Ans: In Datastage, Usage Analysis is performed within a few clicks. Launch Datastage Manager and right-click the job. Then select Usage Analysis, and that's it.

Q27. How do you find the number of rows in a sequential file?
Ans: To find rows in a sequential file, we can use the system variable @INROWNUM.
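Q27 counts rows inside a job via @INROWNUM; outside DataStage, a quick sanity check on a sequential (flat) file can be done from the UNIX shell. A minimal sketch, assuming a newline-delimited file (the path is hypothetical):

    # count rows in a newline-delimited sequential file
    wc -l < /data/source/customers.txt
    # subtract 1 if the file carries a header row
    echo $(( $(wc -l < /data/source/customers.txt) - 1 ))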
Q28. Differentiate between Hash file and Sequential file?
Ans: The only difference between the Hash file and Sequential file is that the Hash file saves data on a hash algorithm and a hash key value, while a sequential file doesn't have any key value to save the data. Based on this hash key feature, searching in a Hash file is faster than in a sequential file.

Q29. How to clean the Datastage repository?
Ans: We can clean the Datastage repository by using the Clean Up Resources functionality in the Datastage Director.

Q30. How is a routine called in a Datastage job?
Ans: In Datastage, routines are of two types, i.e. Before Sub Routines and After Sub Routines. We can call a routine from the transformer stage in Datastage.

Q31. Differentiate between Operational Data Store (ODS) and Data warehouse?
Ans: We can say an ODS is a mini data warehouse. An ODS doesn't contain data for more than one year, while a data warehouse contains detailed information regarding the entire business.

Q32. NLS stands for what in Datastage?
Ans: NLS means National Language Support. It can be used to incorporate other languages such as French, German, and Spanish, etc. in the data, as required for processing by the data warehouse. These languages have the same scripts as the English language.

Q33. Can you explain how one could drop the index before loading the data into the target in Datastage?
Ans: In Datastage, we can drop the index before loading the data into the target by using the Direct Load functionality of the SQL Loader utility.

Q34. How can one implement slowly changing dimensions in Datastage?
Ans: Slowly changing dimensions is not a concept related to Datastage. Datastage is used for ETL purposes and not for slowly changing dimensions.

Q35. How can one find bugs in a job sequence?
Ans: We can find bugs in a job sequence by using the DataStage Director.

Q36. How are complex jobs implemented in Datastage to improve performance?
Ans: In order to improve performance in Datastage, it is recommended not to use more than 20 stages in a job. If you need to use more than 20 stages, then it is better to use another job for those stages.

Q37. Name the third-party tools that can be used in Datastage?
Ans: The third-party tools that can be used in Datastage are Autosys, TNG and Event Coordinator. I have worked with these tools and possess hands-on experience with them.

Q38. Define Project in Datastage?
Ans: Whenever we launch the Datastage client, we are asked to connect to a Datastage project. A Datastage project contains Datastage jobs, built-in components and Datastage Designer or user-defined components.

Q39. How many types of hash files are there?
Ans: There are two types of hash files in DataStage, i.e. Static Hash File and Dynamic Hash File. The static hash file is used when a limited amount of data is to be loaded into the target database. The dynamic hash file is used when we don't know the amount of data from the source file.

Q40. Define MetaStage?
Ans: In Datastage, MetaStage is used to save metadata that is helpful for data lineage and data analysis.

Q41. Have you ever worked in a UNIX environment, and why is it useful in Datastage?
Ans: Yes, I have worked in a UNIX environment. This knowledge is useful in Datastage because sometimes one has to write UNIX programs, such as batch programs to invoke batch processing, etc. (a minimal wrapper sketch follows).
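As a sketch of the kind of UNIX batch program Q41 mentions: a small wrapper that runs a DataStage job via dsjob and reports its status. The project and job names are hypothetical; the status codes commonly reported with -jobstatus are 1 (finished OK) and 2 (finished with warnings), but verify against your release:

    #!/bin/sh
    # run a DataStage job and report its status
    PROJECT=MyProject
    JOB=LoadCustomers

    dsjob -run -jobstatus "$PROJECT" "$JOB"
    STATUS=$?

    case "$STATUS" in
      1) echo "$JOB finished OK" ;;
      2) echo "$JOB finished with warnings" ;;
      *) echo "$JOB failed or aborted (status $STATUS)" >&2
         exit 1 ;;
    esac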
Q42. Differentiate between Datastage and Datastage TX?
Ans: Datastage is a tool for ETL (Extract, Transform and Load) and Datastage TX is a tool for EAI (Enterprise Application Integration).

Q43. What do transaction size and array size mean in Datastage?
Ans: Transaction size means the number of rows written before committing the records to a table. Array size means the number of rows written/read to or from the table respectively.

Q44. How many types of views are there in a Datastage Director?
Ans: There are three types of views in a Datastage Director, i.e. Job View, Log View and Status View.

Q45. Why do we use a surrogate key?
Ans: In Datastage, we use a surrogate key instead of a unique key. A surrogate key is mostly used for retrieving data faster. It uses an index to perform the retrieval operation.

Q46. How are rejected rows managed in Datastage?
Ans: In Datastage, the rejected rows are managed through constraints in the transformer. We can either place the rejected rows in the properties of a transformer or create temporary storage for rejected rows with the help of the REJECTED command.

Q47. Differentiate between ODBC and DRS stage?
Ans: The DRS stage is faster than the ODBC stage because it uses native databases for connectivity.

Q48. Define Orabulk and BCP stages?
Ans: The Orabulk stage is used to load large amounts of data into one target table of an Oracle database. The BCP stage is used to load large amounts of data into one target table of Microsoft SQL Server.

Q49. Define DS Designer?
Ans: The DS Designer is used to design the work area and add various links to it.

Q50. Why do we use Link Partitioner and Link Collector in Datastage?
Ans: In Datastage, the Link Partitioner is used to split data into different parts through certain partitioning methods. The Link Collector is used to gather data from various partitions/segments into a single link and save it in the target table.

More questions

Q51. How did you handle reject data?
Ans: Typically a Reject link is defined and the rejected data is loaded back into the data warehouse. So a Reject link has to be defined for every Output link where you wish to collect rejected data. Rejected data is typically bad data like duplicates of primary keys or null rows where data is expected.

Q52. If you worked with DS6.0 and later versions, what are Link-Partitioner and Link-Collector used for?
Ans: Link Partitioner - used for partitioning the data. Link Collector - used for collecting the partitioned data.

Q53. What are Routines and where/how are they written, and have you written any routines before?
Ans: Routines are stored in the Routines branch of the DataStage Repository, where you can create, view or edit them. The following are the different types of routines: 1) Transform functions 2) Before-after job subroutines 3) Job Control routines.

Q54. What are OConv() and IConv() functions and where are they used?
Ans: IConv() converts a string to an internal storage format. OConv() converts an expression to an output format.

Q55. How did you connect to DB2 in your last project?
Ans: Using DB2 ODBC drivers (see the configuration sketch after Q56).

Q56. Explain METASTAGE?
Ans: MetaStage is used to handle the metadata, which will be very useful for data lineage and data analysis later on. Metadata defines the type of data we are handling. These data definitions are stored in the repository and can be accessed with the use of MetaStage.
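For the ODBC connection in Q55: on UNIX, DataStage ODBC data sources are typically wired up in an .odbc.ini entry. A minimal sketch with a hypothetical DSN, driver path and database name; the exact keywords depend on the ODBC driver shipped with your install, so treat this as illustrative only:

    [DB2_SRC]
    Driver=/path/to/db2_odbc_driver.so
    Description=DB2 source used for lookups
    Database=SAMPLE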
Q57. Do you know about the INTEGRITY/QUALITY stage?
Ans: Quality Stage can be integrated with DataStage. In Quality Stage we have many stages like Investigate, Match and Survivorship, so that we can do the quality-related work. To integrate it with DataStage we need the Quality Stage plugin to accomplish the task.

Q58. Explain the differences between Oracle 8i/9i?
Ans: Oracle 8i does not support the pseudo column sysdate but 9i does. In Oracle 8i we can create 256 columns in a table, but in 9i we can create up to 1000 columns (fields).

Q59. How do you merge two files in DS?
Ans: Either use the copy command as a Before-job subroutine if the metadata of the two files is the same, or create a job to concatenate the two files into one if the metadata is different (see the sketch after Q68).

Q60. What is DS Designer used for?
Ans: You use the Designer to build jobs by creating a visual design that models the flow and transformation of data from the data source through to the target warehouse. The Designer's graphical interface lets you select stage icons, drop them onto the Designer work area, and add links.

Q61. What is DS Administrator used for?
Ans: The Administrator enables you to set up DataStage users, control the purging of the Repository, and, if National Language Support (NLS) is enabled, install and manage maps and locales.

Q62. What is DS Director used for?
Ans: The Datastage Director is used to run and validate the jobs. We can go to the Datastage Director from the Datastage Designer itself.

Q63. What is DS Manager used for?
Ans: The Manager is a graphical tool that enables you to view and manage the contents of the DataStage Repository.

Q64. What are Static Hash files and Dynamic Hash files?
Ans: As the names themselves suggest what they mean. In general we use Type-30 dynamic hash files. The data file has a default size of 2 GB and the overflow file is used if the data exceeds the 2 GB size.

Q65. What is the Hash file stage and what is it used for?
Ans: It is used for look-ups. It is like a reference table. It is also used in place of ODBC or OCI tables for better performance.

Q66. How are the Dimension tables designed?
Ans: Find where the data for this dimension is located. Figure out how to extract this data. Determine how to maintain changes to this dimension. Modify the fact table and DW population routines.

Q67. Does the selection of 'Clear the table and Insert rows' in the ODBC stage send a TRUNCATE statement to the DB or does it do some kind of DELETE logic?
Ans: There is no TRUNCATE on ODBC stages. 'Clear the table' issues a DELETE FROM statement. On an OCI stage such as Oracle, you do have both Clear and Truncate options. They are radically different in permissions (TRUNCATE requires you to have alter table permissions, whereas DELETE doesn't).

Q68. Tell me one situation from your last project where you faced a problem, and how did you solve it?
Ans: A. The jobs in which data is read directly from OCI stages were running extremely slow. I had to stage the data before sending it to the transformer to make the jobs run faster. B. The job aborted in the middle of loading some 500,000 rows. We had the option of either cleaning/deleting the loaded data and then running the fixed job, or running the job again from the row at which it aborted. To make sure the load was proper, we opted for the former.
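For the concatenation case in Q59: under UNIX this is typically a one-liner that can be hooked in as a Before-job subroutine via ExecSH. A minimal sketch with hypothetical file names, assuming both files share the same layout:

    # concatenate two flat files with identical layouts into one input file
    cat /data/in/part1.txt /data/in/part2.txt > /data/in/combined.txt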
Q69. Why do we have to load the dimensional tables first, then the fact tables?
Ans: As we load the dimensional tables, the (primary) keys are generated, and these keys are foreign keys in the fact tables.

Q70. How will you determine the sequence of jobs to load into a data warehouse?
Ans: First we execute the jobs that load the data into the dimension tables, then the fact tables, then the aggregator tables (if any).

Q71. What are the command line functions that import and export the DS jobs?
Ans: A. dsimport.exe imports the DataStage components. B. dsexport.exe exports the DataStage components.

Q72. What utility do you use to schedule the jobs on a UNIX server other than using Ascential Director?
Ans: Use the crontab utility along with the dsexecute() function with proper parameters passed (a crontab sketch appears after Q96).

Q73. How would you call an external Java function which is not supported by DataStage?
Ans: Starting from DS 6.0 we have the ability to call external Java functions using a Java package from Ascential. In this case we can even use the command line to invoke the Java function, write the return values from the Java program (if any) to a file, and use that file as a source in a DataStage job.

Q74. What will you do in a situation where somebody wants to send you a file, use that file as an input or reference, and then run the job?
Ans: A. Under Windows: use the 'WaitForFileActivity' under the Sequencers and then run the job. Maybe you can schedule the sequencer around the time the file is expected to arrive. B. Under UNIX: poll for the file. Once the file has arrived, start the job or sequencer that depends on the file (see the polling sketch after Q81).

Q75. Read the String functions in DS?
Ans: Functions like '[ ]' (the substring function) and ':' (the concatenation operator). Syntax: string [ [start,] length ].

Q76. How did you connect with DB2 in your last project?
Ans: Most of the time the data was sent to us in the form of flat files; the data is dumped and sent to us. In some cases where we needed to connect to DB2 for look-ups, we used ODBC drivers to connect to DB2 (or DB2-UDB), depending on the situation and availability. Certainly DB2-UDB is better in terms of performance, as you know native drivers are always better than ODBC drivers. 'iSeries Access ODBC Driver 9.00.02.02' - ODBC drivers to connect to AS400/DB2.

Q77. What are Sequencers?
Ans: Sequencers are job control programs that execute other jobs with preset job parameters.

Q78. Differentiate Primary Key and Partition Key?
Ans: A Primary Key is a combination of unique and not null. It can be a collection of key values called a composite primary key. A Partition Key is just a part of the Primary Key.

Q79. How did you handle an 'Aborted' sequencer?
Ans: In almost all cases we have to delete the data inserted by it from the DB manually, fix the job and then run the job again.

Q80. What versions of DS have you worked with?
Ans: DS 7.0.2/6.0/5.2

Q81. If you worked with DS6.0 and later versions, what are Link-Partitioner and Link-Collector used for?
Ans: Link Partitioner - used for partitioning the data. Link Collector - used for collecting the partitioned data.
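The UNIX option in Q74 is to poll for the file before starting the job. A minimal sketch, with hypothetical paths and job names, a 30-second poll interval and a one-hour timeout:

    #!/bin/sh
    # wait for an inbound file, then kick off the DataStage job
    FILE=/data/inbound/customers.dat
    TRIES=120                      # 120 x 30s = 1 hour

    while [ ! -f "$FILE" ]; do
      TRIES=$(( TRIES - 1 ))
      [ "$TRIES" -le 0 ] && { echo "timed out waiting for $FILE" >&2; exit 1; }
      sleep 30
    done

    dsjob -run -jobstatus MyProject LoadCustomers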
Q82. How do you rename all of the jobs to support your new file-naming conventions?
Ans: Create an Excel spreadsheet with new and old names. Export the whole project as a dsx. Write a Perl program which can do a simple rename of the strings by looking up the Excel file.

Q83. Explain the types of Parallel Processing?
Ans: Parallel Processing is broadly classified into two types: a) SMP - Symmetric Multiprocessing; b) MPP - Massive Parallel Processing.

Q84. Does the selection of 'Clear the table and Insert rows' in the ODBC stage send a TRUNCATE statement to the DB or does it do some kind of DELETE logic?
Ans: There is no TRUNCATE on ODBC stages. 'Clear the table' issues a DELETE FROM statement. On an OCI stage such as Oracle, you do have both Clear and Truncate options.

Q85. When should we use an ODS?
Ans: DWHs are typically read-only and batch updated on a schedule. ODSs are maintained in more real time and trickle-fed constantly.

Q86. What is the default cache size? How do you change the cache size if needed?
Ans: The default cache size is 256 MB. We can increase it by going into the Datastage Administrator, selecting the Tunables tab and specifying the cache size there.

Q87. What are the types of Parallel Processing?
Ans: Parallel Processing is broadly classified into two types: a) SMP - Symmetric Multiprocessing; b) MPP - Massive Parallel Processing.

Q88. How to handle date conversions in Datastage? Convert an mm/dd/yyyy format to yyyy-dd-mm?
Ans: We use a) the "Iconv" function - internal conversion; b) the "Oconv" function - external conversion. The function to convert mm/dd/yyyy format to yyyy-dd-mm is Oconv(Iconv(Fieldname,"D/MDY[2,2,4]"),"D-YDM[4,2,2]").

Q89. Differentiate Primary Key and Partition Key?
Ans: A Primary Key is a combination of unique and not null. It can be a collection of key values called a composite primary key. A Partition Key is just a part of the Primary Key.

Q90. Is it possible to calculate a hash total for an EBCDIC file and have the hash total stored as EBCDIC using Datastage?
Ans: Currently, the total is converted to ASCII, even though the individual records are stored as EBCDIC.

Q91. How do you merge two files in DS?
Ans: Either use the copy command as a Before-job subroutine if the metadata of the two files is the same, or create a job to concatenate the two files into one if the metadata is different.

Q92. How did you connect to DB2 in your last project?
Ans: Using DB2 ODBC drivers.

Q93. What is the default cache size? How do you change the cache size if needed?
Ans: The default cache size is 256 MB. We can increase it by going into the Datastage Administrator, selecting the Tunables tab and specifying the cache size there.

Q94. What are Sequencers?
Ans: Sequencers are job control programs that execute other jobs with preset job parameters.

Q95. How do you execute a Datastage job from the command line prompt?
Ans: Using the "dsjob" command, as follows: dsjob -run -jobstatus projectname jobname (a fuller sketch appears after Q96).

Q96. How do you rename all of the jobs to support your new file-naming conventions?
Ans: Create an Excel spreadsheet with new and old names. Export the whole project as a dsx. Write a Perl program which can do a simple rename of the strings by looking up the Excel file. Then import the new dsx file, probably into a new project for testing. Recompile all jobs. Be cautious that the names of the jobs have also been changed in your job control jobs or Sequencer jobs, so you have to make the necessary changes to these Sequencers.
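A slightly fuller version of the dsjob invocation from Q95, plus a crontab entry for the scheduling approach mentioned in Q72. The project, job, parameter name and script paths are hypothetical; check dsjob's options on your release:

    # run a job, passing a job parameter, and wait for its status
    dsjob -run -param LoadDate=2004-01-31 -jobstatus MyProject LoadCustomers

    # crontab entry (crontab -e): run a wrapper script nightly at 2 AM
    0 2 * * * /home/dsadm/scripts/run_load.sh >> /home/dsadm/logs/run_load.log 2>&1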


Source: https://www.kitsonlinetrainings.com/interview-question/datastage-interview-questions