Your job will write its output to a fixed-length data file. When configuring the Sequential File stage as a target, which Format and Columns tab properties need to be considered for this type of file output?
A. On the Output Link Format tab, change the 'Delimiter' property to whitespace.
B. On the Output Link Format tab, add the 'Record Type' property to the tree and set its value to 'F'.
C. On the Output Link Columns tab, ensure that all the defined column data types are fixed-length types.
D. On the Output Link Columns tab, specify the total record size based on all of the columns defined.
Identify the two statements that are true about the functionality of the XML Pack 3.0. (Choose two.)
A. XML Stages are Plug-in stages.
B. XML Stage can be found in the Database folder on the palette.
C. Uses a unique custom GUI interface called the Assembly Editor.
D. It includes the XML Input, XML Output, and XML Transformer stages.
E. A single XML Stage, which can be used as a source, target, or transformation.
Identify two items that are created as a result of running Balanced Optimization on a job that accesses a Hadoop Distributed File System (HDFS) as a source. (Choose two.)
A. A JAQL stage is found in the optimized job result.
B. A Big Data File stage is found in the optimized job result.
C. A Balanced Optimization parameter set is found in the project.
D. A Balanced Optimization Shared Container is found in the project.
E. A MapReduce Transformer stage is found in the optimized job result.
Which statement is correct about the Data Rules stage?
A. The Data Rules stage works with rule definitions only; not executable rules.
B. As a best practice, you should create and publish new rules from the Data Rules stage.
C. If you have the Rule Creator role in InfoSphere Information Analyzer, you can create and publish rule definitions and rule set definitions directly from the stage itself.
D. When a job that uses the Data Rules stage runs, the output of the stage is passed to the downstream stages and results are stored in the Analysis Results database (IADB).
Which job design technique can be used to give unique names to sequential output files that are used in multi-instance jobs?
A. Use parameters to identify file names.
B. Generate unique file names by using a macro.
C. Use DSJobInvocationID to generate a unique filename.
D. Use a Transformer stage variable to generate the name.
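The idea behind option C is that DataStage resolves the DSJobInvocationID macro to a distinct value for each instance of a multi-instance job, so embedding it in the output path keeps the files unique. A minimal shell sketch of the naming pattern (the invocation ID value and the path are made up for illustration; at run time DataStage substitutes the macro itself):

```shell
# Simulate the DSJobInvocationID macro that DataStage resolves per instance
# (the value "instance01" and the path below are illustrative only).
DSJobInvocationID="instance01"

# Each job instance writes to its own file, so parallel runs never collide.
OUTPUT_FILE="/data/out/customers_${DSJobInvocationID}.txt"
echo "$OUTPUT_FILE"
```

In the job itself the same effect is achieved by referencing the macro in the Sequential File stage's file name property, typically via a job parameter.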
The ODBC stage can handle which two SQL Server data types? (Choose two.)
A. Date
B. Time
C. GUID
D. Datetime
E. SmallDateTime
In a Transformer expression for a stage variable, there is a nullable input column. Assume the legacy NULL processing option is turned off. What happens when a row is processed that contains NULL in that input column?
A. The job aborts.
B. The row is rejected.
C. NULL is written to the stage variable.
D. The value written to the stage variable is undetermined.
The number of File Set data files created depends upon what two items? (Choose two.)
A. Amount of memory.
B. Schema definition of the file.
C. Operating system limitations.
D. Number of logical processing nodes.
E. Number of disks in the export or default disk pool connected to each processing node in the default node pool.
What are the valid join operations for the Join stage? (Choose two.)
A. Inner join
B. Where join
C. Top outer join
D. Right outer join
E. Bottom inner join
Which two project environment variables can be set in your parallel jobs to support your optimization strategy for partitioning and sorting? (Choose two.)
A. $APT_NO_PART_INSERTION
B. $APT_OPT_SORT_INSERTION
C. $APT_RESTRICT_SORT_USAGE
D. $APT_PARTITION_FLUSH_COUNT
E. $APT_TSORT_STRESS_BLOCKSIZE
Which statement is true about improving job performance when using Balanced Optimization?
A. Convert a job to use bulk staging tables for Big Data File stages.
B. Balanced Optimization attempts to balance the work between the source server, the target server, and the job.
C. If the job contains an Aggregator stage, data reduction stages will be pushed into a target data server by default.
D. To ensure that a particular stage can only be pushed into a source or target connector, you can set the Stage Affinity property to source or target.
You would like to run a particular processing job within a job sequence once for each weekday. Which two methods could be used? (Choose two.)
A. Set the frequency property in the job scheduler to weekdays only.
B. Add the job scheduler stage to the job sequence and set to weekdays only.
C. Call a routine in the job sequencer that starts the processing job for each day you would like to process.
D. Use a parameter set that contains the days of the week you would like to process and a routine to parse the days contained in the "day" parameter.
E. Start Loop and End Loop activity stages on the job sequencer canvas where you loop through the days and pass a value for each day into the job via parameter.
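The Start Loop / End Loop pattern in option E can be approximated from the shell to see the control flow: loop over the weekdays and launch the job once per day with the day passed in as a parameter. This is only a sketch; the project, job, and parameter names in the commented dsjob call are hypothetical.

```shell
# Run the processing job once per weekday, passing the day via a job
# parameter (MyProject, DailyProcessingJob, and ProcessDay are illustrative).
for DAY in Monday Tuesday Wednesday Thursday Friday; do
  echo "Running DailyProcessingJob for $DAY"
  # dsjob -run -param ProcessDay="$DAY" MyProject DailyProcessingJob
done
```

Inside a job sequence, the Start Loop activity's delimited list property holds the day values and the loop counter variable feeds the job activity's parameter.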
You have made a copy of a job in your project and made major changes to the copy. You now want to identify all the changes that have been made. Which task will allow you to identify these changes?
A. Export the original job to a backup directory.
B. Export the modified job to the backup directory.
C. Select the job, then right-click and choose Compare against.
D. Select the job, then right-click and choose Cross Project Compare.
You are experiencing performance issues with a given job and have been assigned the task of understanding what is happening at run time. What step should you take to understand the job's performance issues?
A. Replace Join stages by Lookup stages.
B. Run the job with $APT_TRACE_RUN set to true.
C. Run the job with $APT_DUMP_SCORE set to true.
D. Replace Transformer stages with custom operators.
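For context on the environment variables above: $APT_DUMP_SCORE makes the parallel engine write its score (the operators, datasets, and partitioning actually used at run time) to the job log, which is the standard first step in diagnosing runtime behavior. A sketch of enabling it for a command-line run follows; this is a config fragment, and the project and job names in the commented invocation are hypothetical.

```shell
# Config fragment: enable the score dump for this session only.
export APT_DUMP_SCORE=1          # engine writes the job score to the log
# dsjob -run MyProject ProblemJob  # illustrative invocation
```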
Identify two restructure stages that allow you to create or organize vectors in the output link results. (Choose two.)
A. Split Vector
B. Column Import
C. Split Subrecord
D. Merge Records
E. Make Subrecord