INFORMATICA INTERVIEW QUESTIONS - Extracted from 
GeekInterview.com by Deepak Babu 
http://prdeepakbabu.wordpress.com 
DISCLAIMER: The questions and answers available here are from geekinterview.com. They have been compiled into a single document for ease of browsing through the Informatica-related questions. For details, please refer to www.geekinterview.com. We are not responsible for any data inaccuracy. 
1.Informatica - Why we use lookup transformations? 
QUESTION #1 Lookup transformations can access data from relational tables 
that are not sources in the mapping. With a Lookup transformation, we can 
accomplish the following tasks: 
Get a related value: get the Employee Name from the Employee table based on the Employee ID. 
Perform a calculation. 
Update slowly changing dimension tables: we can use an unconnected Lookup transformation to 
determine whether a record already exists in the target or not. 
No best answer available. Please pick the good answer available or submit your answer. 
January 19, 2006 01:12:33 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: Why we use lookup transformations? 
======================================= 
Nice question. If we don't have a lookup, our data warehouse will have more unwanted duplicates. 
Use a Lookup transformation in your mapping to look up data in a relational table, view, or synonym. 
Import a lookup definition from any relational database to which both the Informatica Client and Server 
can connect. You can use multiple Lookup transformations in a mapping. 
Cheers 
Sithu 
file:///C|/Perl/bin/result.html (1 of 363)4/1/2009 7:50:58 PM
======================================= 
Lookup transformations are used to search data from relational tables/flat files that are not used in the 
mapping. 
Types of Lookup: 
1. Connected Lookup 
2. UnConnected Lookup 
======================================= 
The main use of a lookup is to get a related value, either from relational sources or flat files. 
======================================= 
The following are reasons for using lookups: 
1) We use Lookup transformations that query the largest amounts of data to 
improve overall performance. By doing that we can reduce the number of lookups 
on the same table. 
2) If a mapping contains Lookup transformations, we enable lookup caching 
if this option is not already enabled. 
We use a persistent cache to improve performance of the lookup whenever 
possible. 
We explore the possibility of using concurrent caches to improve session 
performance. 
We use the Lookup SQL Override option to add a WHERE clause to the default 
SQL statement if one is not defined. 
We add an ORDER BY clause to the lookup SQL statement if no ORDER BY is 
defined. 
We use the SQL override to suppress the default ORDER BY statement and enter an 
override ORDER BY with fewer columns. 
Indexing the lookup table: we can improve performance for the following types of lookups: 
For cached lookups, we index the lookup table using the columns in the 
lookup ORDER BY statement. 
For uncached lookups, we index the lookup table using the columns in the 
lookup WHERE condition. 
3) In some cases we use a lookup instead of a Joiner, as a lookup can be faster than a 
Joiner when the lookup contains only the master data. 
4) Lookups also help in performance tuning of the mappings. 
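The caching advice above can be illustrated outside Informatica with a minimal Python sketch (this is an analogy, not Informatica code; the employee table and column names are made up): an uncached lookup issues one query per source row, while a cached lookup scans the table once at session start.

```python
# Sketch: why lookup caching reduces work (hypothetical employee table).
EMPLOYEE_TABLE = [
    {"EMP_ID": 1, "EMP_NAME": "Asha"},
    {"EMP_ID": 2, "EMP_NAME": "Ravi"},
]

queries = 0  # counts simulated database hits

def uncached_lookup(emp_id):
    """One 'query' per input row, like an uncached lookup."""
    global queries
    queries += 1
    for row in EMPLOYEE_TABLE:
        if row["EMP_ID"] == emp_id:
            return row["EMP_NAME"]
    return None

def build_cache():
    """One scan of the table, like a lookup cache built at session start."""
    global queries
    queries += 1
    return {row["EMP_ID"]: row["EMP_NAME"] for row in EMPLOYEE_TABLE}

source_rows = [1, 2, 1, 2, 1]

queries = 0
uncached = [uncached_lookup(i) for i in source_rows]
uncached_queries = queries  # one hit per source row

queries = 0
cache = build_cache()
cached = [cache.get(i) for i in source_rows]
cached_queries = queries  # a single scan, then in-memory hits

assert uncached == cached
print(uncached_queries, cached_queries)
```

A persistent cache goes one step further: the dictionary would be saved to disk and reused across session runs instead of being rebuilt each time.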
======================================= 
A Lookup transformation is like a set of reference data for the target table. For example, suppose you are 
travelling by auto rickshaw. One morning you notice the auto driver showing you a card and 
saying that from today onwards there is a hike in petrol prices, so you have to pay more. The card he is 
showing is a set of reference data for his customers. The Lookup transformation works in the same way. 
These are of 2 types : 
a) Connected Lookup 
b) Un-connected lookup 
A connected lookup is connected in a single pipeline from a source to a target, whereas an unconnected 
lookup is isolated within the mapping and is called with the help of an Expression transformation. 
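The connected/unconnected distinction can be sketched in Python (an analogy only; the rate table and function names are invented): a connected lookup is a stage every row flows through, while an unconnected lookup is a function invoked on demand from an expression and returns a single value.

```python
# Analogy: connected lookup = a stage in the pipeline;
# unconnected lookup = a function called from an expression (like :LKP.xxx).
RATE_TABLE = {"INR": 1.0, "USD": 83.0}  # hypothetical lookup table

def unconnected_rate_lookup(currency):
    # Invoked only when an expression needs it; one return value,
    # like the single return port of an unconnected lookup.
    return RATE_TABLE.get(currency)

def connected_stage(rows):
    # Part of the data flow: every row passes through and gains a column.
    for row in rows:
        yield dict(row, RATE=RATE_TABLE.get(row["CCY"]))

source = [{"CCY": "USD", "AMT": 2}, {"CCY": "INR", "AMT": 5}]
piped = list(connected_stage(source))

# Unconnected style: called from within a per-row "expression".
converted = [r["AMT"] * unconnected_rate_lookup(r["CCY"]) for r in source]
print(piped, converted)
```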
======================================= 
Lookup transformations are used to: 
Get a related value 
Update slowly changing dimensions 
Calculate expressions 
======================================= 
2.Informatica - While importing the relational source definition 
from a database, what r the metadata of the source U import? 
QUESTION #2 Source name 
Database location 
Column names 
Datatypes 
Key constraints 
No best answer available. Please pick the good answer available or submit your answer. 
September 28, 2006 06:30:08 #1 
srinvas vadlakonda 
RE: While importing the relational source defintion fr... 
======================================= 
Source name, data types, key constraints, database location. 
======================================= 
Relational sources are tables, views, and synonyms: source name, database location, column name, datatype, 
key constraints. For synonyms you will have to manually create the constraints. 
======================================= 
3.Informatica - How many ways you can update a relational 
source definition and what r they? 
QUESTION #3 Two ways 
1. Edit the definition 
2. Reimport the definition 
No best answer available. Please pick the good answer available or submit your answer. 
January 30, 2006 04:59:06 #1 
gazulas Member Since: January 2006 Contribution: 17 
RE: How many ways you can update a relational source d... 
======================================= 
We can do it in 2 ways: 
1) by reimporting the source definition 
2) by editing the source definition 
======================================= 
4.Informatica - Where should U place the flat file to import the 
flat file definition to the designer? 
QUESTION #4 Place it in the local folder 
No best answer available. Please pick the good answer available or submit your answer. 
December 13, 2005 08:42:59 #1 
rishi 
RE: Where should U place the flat file to import the f... 
======================================= 
There is no such restriction on where to place the source file. From a performance point of view it is better to place the 
file in the server's local src folder. If you need the path, please check the server properties available in the Workflow 
Manager. 
That doesn't mean we should not place it in any other folder, but if we place it in the server src folder it 
will be selected by default at session creation time. 
======================================= 
The file must be in a directory local to the client machine. 
======================================= 
Basically the flat file should be stored in the src folder in the Informatica server folder. 
Logically it should pick up the file from any location, but otherwise it gives an error of invalid identifier or 
is not able to read the first row. 
So it is better to keep the file in the src folder, which is already created when Informatica is installed. 
======================================= 
We can place the source file anywhere on the network, but it will take more time to fetch data from the source 
file. If the source file is present in the server's src folder, it will fetch data up to 25 times 
faster than otherwise. 
======================================= 
5.Informatica - To provide support for mainframe source data, 
which files r used as source definitions? 
QUESTION #5 COBOL files 
No best answer available. Please pick the good answer available or submit your answer. 
October 07, 2005 11:49:42 #1 
Shaks Krishnamurthy 
RE: To provide support for Mainframes source data,whic... 
======================================= 
COBOL Copy-book files 
======================================= 
The mainframe files are used as VSAM files in Informatica by using the Normalizer transformation. 
======================================= 
6.Informatica - Which transformation do u need while using 
COBOL sources as source definitions? 
QUESTION #6 Normalizer transformation, which is used to normalize the data, 
since COBOL sources often consist of denormalized data. 
Submitted by: sithusithu 
Normalizer transformation 
Cheers, 
Sithu 
Above answer was rated as good by the following members: 
ramonasiraj 
======================================= 
Normalizer transformation, which is used to normalize the data 
======================================= 
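What the Normalizer does to a denormalized COBOL record can be sketched in Python (an analogy with made-up field names): a repeating group, like a COBOL OCCURS clause, is pivoted into one output row per occurrence.

```python
# Sketch of normalization: one denormalized record with a repeating group
# (like a COBOL OCCURS 4 TIMES clause) becomes one row per occurrence.
denormalized = {"ACCOUNT": "A1", "QTR_SALES": [100, 200, 150, 300]}

def normalize(record):
    # The occurrence index plays the role of the generated column that
    # identifies which repetition a row came from.
    for occurs_index, amount in enumerate(record["QTR_SALES"], start=1):
        yield {"ACCOUNT": record["ACCOUNT"], "QTR": occurs_index, "SALES": amount}

rows = list(normalize(denormalized))
print(rows)
```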
7.Informatica - How can U create or import flat file definition in 
to the warehouse designer? 
QUESTION #7 U can not create or import a flat file definition into the Warehouse 
Designer directly. Instead U must analyze the file in the Source Analyzer, then drag it 
into the Warehouse Designer. When U drag the flat file source definition into the 
Warehouse Designer workspace, the Warehouse Designer creates a relational target 
definition, not a file definition. If u want to load to a file, configure the session to 
write to a flat file. When the Informatica server runs the session, it creates and 
loads the flat file. 
No best answer available. Please pick the good answer available or submit your answer. 
August 22, 2005 03:23:12 #1 
Praveen 
RE: How can U create or import flat file definition in to the warehouse designer? 
======================================= 
U can create a flat file definition in the Warehouse Designer. In the Warehouse Designer u can create a new 
target: select the type as flat file, save it, and u can enter various columns for that created target by 
editing its properties. Once the target is created, save it; u can import it from the Mapping Designer. 
======================================= 
Yes you can import flat file directly into Warehouse designer. This way it will import the field 
definitions directly. 
======================================= 
1) Manually create the flat file target definition in the Warehouse Designer. 
2) Create a target definition from a source definition. This is done by dropping a source definition in the 
Warehouse Designer. 
3) Import a flat file definition using the flat file wizard (the file must be local to the client machine). 
======================================= 
While creating flat files manually, we drag and drop the structure from the Source Qualifier if the structure we need is the 
same as the source's. For this we need to check in the source and then drag and drop it into the flat file; if 
not, all the columns in the source will be changed to primary keys. 
======================================= 
8.Informatica - What is a mapplet? 
QUESTION #8 A mapplet is a set of transformations that you build in the Mapplet 
Designer and U can use in multiple mappings. 
No best answer available. Please pick the good answer available or submit your answer. 
December 08, 2005 23:38:47 #1 
phani 
RE: What is the maplet? 
======================================= 
For example: suppose we have several fact tables that require a series of dimension keys. Then we can create a 
mapplet which contains a series of Lookup transformations to find each dimension key and use it in each 
fact table mapping, instead of creating the same lookup logic in each mapping. 
======================================= 
Part(sub set) of the Mapping is known as Mapplet 
Cheers 
Sithu 
======================================= 
A set of transformations whose logic can be reused. 
======================================= 
A mapplet should have a Mapplet Input transformation, which receives input values, and an Output 
transformation, which passes the final modified data back to the mapping. 
When the mapplet is displayed within the mapping, only the input and output ports are displayed, so the 
internal logic is hidden from the end user's point of view. 
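The "reusable set of transformations behind input/output ports" idea can be sketched in Python (an analogy; the dimension tables and names are invented): a mapplet behaves like a function that several mappings call instead of duplicating the same lookup logic.

```python
# Analogy: a mapplet bundles transformations behind input/output ports so
# several mappings can reuse the same logic. Dimension tables are made up.
CUST_DIM = {"C1": 10}
PROD_DIM = {"P1": 20}

def dimension_key_mapplet(row):
    """Input port: a source row. Output ports: the surrogate keys."""
    return {
        "CUST_KEY": CUST_DIM.get(row["CUST_ID"], -1),
        "PROD_KEY": PROD_DIM.get(row["PROD_ID"], -1),
    }

# Two different "mappings" reuse the same mapplet instead of
# repeating the lookup logic in each one.
def sales_fact_mapping(row):
    return dict(row, **dimension_key_mapplet(row))

def returns_fact_mapping(row):
    return dict(row, **dimension_key_mapplet(row))

out = sales_fact_mapping({"CUST_ID": "C1", "PROD_ID": "P1", "AMT": 9})
print(out)
```

Changing `dimension_key_mapplet` changes every mapping that uses it, which mirrors how edits to a mapplet propagate to all mappings containing it.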
======================================= 
A reusable mapping is known as a mapplet, and reusable transformations can be used within a mapplet. 
======================================= 
Maplet is a reusable business logic which can be used in mappings 
======================================= 
A mapplet is a reusable object which contains one or more transformations used to 
populate data from source to target based on the business logic. We can use the same logic in 
different mappings without creating the mapping again. 
======================================= 
The Mapplet Designer is used to create mapplets. 
======================================= 
A mapplet is a reusable object that represents a set of transformations. Mapplets can be designed using the 
Mapplet Designer in Informatica PowerCenter. 
======================================= 
Basically a mapplet is a subset of the mapping in which we can keep the logic for each 
dimension key, instead of creating it individually in different mappings. If we want a series of 
dimension keys in the final fact table, we reuse it in the Mapping Designer. 
======================================= 
9.Informatica - what is a transformation? 
QUESTION #9 It is a repository object that generates, modifies, or passes data. 
No best answer available. Please pick the good answer available or submit your answer. 
November 23, 2005 16:06:23 #1 
sir 
RE: what is a transforamation? 
======================================= 
A transformation is a repository object that passes data to the next stage (i.e. to the next transformation or 
target) with or without modifying the data. 
======================================= 
It is a process of converting given input to desired output. 
======================================= 
A set of operations. 
Cheers 
Sithu 
======================================= 
A transformation is a repository object that converts a given input to the desired output. It can generate, 
modify, and pass data. 
======================================= 
A transformation is a repository object 
that generates, modifies, or passes data. 
The Designer provides a set of transformations that perform specific functions. 
For example, an Aggregator transformation performs calculations on groups of data. 
======================================= 
10.Informatica - What r the designer tools for creating 
tranformations? 
QUESTION #10 Mapping designer 
Transformation developer 
Mapplet designer 
No best answer available. Please pick the good answer available or submit your answer. 
February 21, 2007 05:29:40 #1 
MANOJ KUMAR PANIGRAHI 
RE: What r the designer tools for creating tranformati... 
======================================= 
There r 2 types of tools used 4 creating transformations: 
Mapping Designer 
Mapplet Designer 
======================================= 
Mapping Designer 
Maplet Designer 
Transformation Developer - for reusable transformations 
======================================= 
11.Informatica - What r the active and passive transformations? 
QUESTION #11 An active transformation can change the number of rows that 
pass through it. A passive transformation does not change the number of rows 
that pass through it. 
No best answer available. Please pick the good answer available or submit your answer. 
January 24, 2006 03:32:14 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What r the active and passive transforamtions? 
======================================= 
Transformations can be active or passive. An active transformation can change the number of rows that 
pass through it, such as a Filter transformation that removes rows that do not meet the filter condition. 
A passive transformation does not change the number of rows that pass through it, such as an 
Expression transformation that performs a calculation on data and passes all rows through the 
transformation. 
Cheers 
Sithu 
======================================= 
Active transformation: a transformation which changes the number of rows when data is flowing from 
source to target. 
Passive transformation: a transformation which does not change the number of rows when data is 
flowing from source to target. 
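The row-count distinction can be sketched in Python (an analogy with made-up column names): a Filter (active) may drop rows, while an Expression (passive) emits exactly one output row per input row.

```python
# Active vs passive, seen as row counts: a Filter (active) can drop rows;
# an Expression (passive) outputs exactly one row per input row.
rows = [{"SAL": 100}, {"SAL": 5000}, {"SAL": 300}]

def filter_transformation(rows, condition):
    # Active: the output row count may differ from the input row count.
    return [r for r in rows if condition(r)]

def expression_transformation(rows):
    # Passive: every row passes through, gaining a derived column.
    return [dict(r, BONUS=r["SAL"] * 0.1) for r in rows]

active_out = filter_transformation(rows, lambda r: r["SAL"] > 200)
passive_out = expression_transformation(rows)
print(len(active_out), len(passive_out))
```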
======================================= 
12.Informatica - What r the connected or unconnected 
transforamations? 
QUESTION #12 An unconnected transformation is not connected to other 
transformations in the mapping. A connected transformation is connected to 
other transformations in the mapping. 
No best answer available. Please pick the good answer available or submit your answer. 
August 22, 2005 03:26:32 #1 
Praveen 
RE: What r the connected or unconnected transforamations? 
======================================= 
An unconnected transformation can't be connected to another transformation, but it can be called inside 
another transformation. 
======================================= 
Here is the deal: 
a connected transformation is a part of your data flow in the pipeline, while an unconnected transformation 
is not; 
much like calling a program by name versus by reference. 
Use unconnected transformations when you want to call the same transformation many times in a single mapping. 
======================================= 
In addition to the first answer: an unconnected transformation is not directly connected and can be used in as many 
other transformations as needed. If you are using a transformation several times, use an unconnected one; you get better 
performance. 
======================================= 
Connected transformation: 
A transformation which participates in the mapping data flow. 
A connected 
transformation can receive multiple inputs and provide multiple outputs. 
Unconnected transformation: 
An unconnected transformation does not participate in the mapping data flow. 
It can receive multiple inputs but provides a single output. 
Thanks 
Rekha 
======================================= 
13.Informatica - How many ways u create ports? 
QUESTION #13 Two ways 
1. Drag the port from another transformation 
2. Click the add button on the ports tab. 
No best answer available. Please pick the good answer available or submit your answer. 
September 28, 2006 06:31:21 #1 
srinivas.vadlakonda 
RE: How many ways u create ports? 
======================================= 
Two ways 
1. Drag the port from another transformation 
2. Click the add button on the ports tab. 
======================================= 
we can copy and paste the ports in the ports tab 
======================================= 
14.Informatica - What r the reusable transformations? 
QUESTION #14 Reusable transformations can be used in multiple mappings. 
When u need to incorporate this transformation into a mapping, U add an instance 
of it to the mapping. Later if U change the definition of the transformation, all 
instances of it inherit the changes. Since the instance of a reusable transformation 
is a pointer to that transformation, U can change the transformation in the 
Transformation Developer and its instances automatically reflect these changes. This 
feature can save U a great deal of work. 
Submitted by: sithusithu 
A transformation that can be reused is known as a reusable transformation. 
You can design one using 2 methods: 
1. using the Transformation Developer 
2. create a normal one and promote it to reusable 
Cheers 
Sithu 
Above answer was rated as good by the following members: 
ramonasiraj 
======================================= 
Hi to all friends out there, 
the transformation that can be reused is called a reusable transformation. 
As the property suggests, it has to be reused. 
We can do this in two different ways: 
1) by creating a normal transformation and making it reusable by checking the check box in the 
properties of the edit transformation. 
2) by using the Transformation Developer: whatever transformation is developed here is reusable, and it 
can be used in the Mapping Designer, where we can further change its properties as per our requirement. 
======================================= 
1. A reusable transformation can be used in multiple mappings. 
2. The Designer stores each reusable transformation as metadata, separate from 
any mappings that use the transformation. 
3. Every reusable transformation falls within a category of transformations available in the Designer. 
4. One can only create an External Procedure transformation as a reusable transformation. 
======================================= 
15.Informatica - What r the methods for creating reusable 
transformations? 
QUESTION #15 Two methods 
1. Design it in the Transformation Developer. 
2. Promote a standard transformation from the Mapping Designer. After U add a 
transformation to the mapping, U can promote it to the status of reusable 
transformation. 
Once U promote a standard transformation to reusable status, U can demote it to 
a standard transformation at any time. 
If u change the properties of a reusable transformation in a mapping, U can revert 
to the original reusable transformation properties by clicking the revert 
button. 
No best answer available. Please pick the good answer available or submit your answer. 
September 12, 2005 12:22:21 #1 
Praveen Vasudev 
RE: methods for creating reusable transforamtions? 
======================================= 
PLEASE THINK TWICE BEFORE YOU POST AN ANSWER. 
Answer: Two methods 
1. Design it in the Transformation Developer. By default it is a reusable transformation. 
2. Promote a standard transformation from the Mapping Designer. After U add a transformation to the 
mapping, U can promote it to the status of reusable transformation. 
Once U promote a standard transformation to reusable status, U CANNOT demote it to a standard 
transformation at any time. 
If u change the properties of a reusable transformation in a mapping, U can revert to the original 
reusable transformation properties by clicking the revert button. 
======================================= 
You can design using 2 methods 
1. using transformation developer 
2. create normal one and promote it to reusable 
Cheers 
Sithu 
======================================= 
16.Informatica - What r the unsupported repository objects for a 
mapplet? 
QUESTION #16 COBOL source definitions 
Joiner transformations 
Normalizer transformations 
Non-reusable Sequence Generator transformations 
Pre- or post-session stored procedures 
Target definitions 
PowerMart 3.5-style LOOKUP functions 
XML source definitions 
IBM MQ source definitions 
No best answer available. Please pick the good answer available or submit your answer. 
January 19, 2006 04:23:12 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What r the unsupported repository objects for a ma... 
======================================= 
- Source definitions. Definitions of database objects (tables, views, synonyms) or files that provide 
source data. 
- Target definitions. Definitions of database objects or files that contain the target data. 
- Multi-dimensional metadata. Target definitions that are configured as cubes and dimensions. 
- Mappings. A set of source and target definitions along with transformations containing business 
logic that you build into the transformation. These are the instructions that the Informatica Server uses 
to transform and move data. 
- Reusable transformations. Transformations that you can use in multiple mappings. 
- Mapplets. A set of transformations that you can use in multiple mappings. 
- Sessions and workflows. Sessions and workflows store information about how and when the 
Informatica Server moves data. A workflow is a set of instructions that describes how and when to run 
tasks related to extracting, transforming, and loading data. A session is a type of task that you can put in 
a workflow. Each session corresponds to a single mapping. 
Cheers 
Sithu 
======================================= 
Hi 
The following answer is from the Informatica help documentation. 
You cannot include the following objects in a mapplet: 
- Normalizer transformations 
- COBOL sources 
- XML Source Qualifier transformations 
- XML sources 
- Target definitions 
- Pre- and post-session stored procedures 
- Other mapplets 
Shivaji Thaneru 
======================================= 
Normalizer, XML Source Qualifier, and COBOL sources cannot be used. 
======================================= 
- Normalizer transformations 
- COBOL sources 
- XML Source Qualifier transformations 
- XML sources 
- Target definitions 
- Pre- and post-session stored procedures 
- Other mapplets 
- PowerMart 3.5-style LOOKUP functions 
- Non-reusable Sequence Generator transformations 
======================================= 
17.Informatica - What r the mapping parameters and mapping 
variables? 
QUESTION #17 A mapping parameter represents a constant value that U can define 
before running a session. A mapping parameter retains the same value throughout 
the entire session. 
When u use a mapping parameter, U declare and use the parameter in a mapping 
or mapplet. Then define the value of the parameter in a parameter file for the session. 
Unlike a mapping parameter, a mapping variable represents a value that can 
change throughout the session. The Informatica server saves the value of a mapping 
variable to the repository at the end of a session run and uses that value the next time 
U run the session. 
No best answer available. Please pick the good answer available or submit your answer. 
September 12, 2005 12:30:13 #1 
Praveen Vasudev 
RE: mapping varibles 
======================================= 
Please refer to the documentation for more understanding. 
Mapping variables have two identities: 
start value and current value. 
Start value = current value when the session starts the execution of the underlying mapping. 
Start value <> current value while the session is in progress and the variable value changes on one 
or more occasions. 
The current value at the end of the session is nothing but the start value for the subsequent run of the 
same session. 
======================================= 
You can use mapping parameters and variables in the SQL query, user-defined join, and source filter of a 
Source Qualifier transformation. You can also use the system variable $$$SessStartTime. 
The Informatica Server first generates an SQL query and scans the query to replace each mapping 
parameter or variable with its start value. Then it executes the query on the source database. 
Cheers 
Sithu 
======================================= 
A mapping parameter represents a constant value defined before the mapping runs. 
Mapping reusability can be achieved by using mapping parameters. 
A mapping variable represents a value that can be changed during the mapping run. 
Mapping variables can be used in an incremental loading process. 
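The parameter/variable contrast above can be sketched in Python (an analogy only; the variable name `$$LAST_ID` and the `repository` dict are invented stand-ins for Informatica's persisted state): a parameter stays constant for the whole run, while a variable's final value is saved at session end and becomes the start value of the next run, enabling incremental loads.

```python
# Sketch: parameter = constant per run; variable = persisted between runs.
repository = {}  # stands in for the repository that stores variable values

def run_session(rows, param_region, var_name="$$LAST_ID"):
    # The variable's start value is whatever the previous run saved.
    last_id = repository.get(var_name, 0)
    # The parameter (param_region) is constant throughout this run.
    loaded = [r for r in rows if r["ID"] > last_id and r["REGION"] == param_region]
    if loaded:
        # Saved at session end; becomes the next run's start value.
        repository[var_name] = max(r["ID"] for r in loaded)
    return loaded

data = [{"ID": 1, "REGION": "EU"}, {"ID": 2, "REGION": "EU"}]
first = run_session(data, "EU")                                   # loads IDs 1, 2
second = run_session(data + [{"ID": 3, "REGION": "EU"}], "EU")    # loads only ID 3
print(len(first), len(second))
```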
======================================= 
18.Informatica - Can U use the mapping parameters or variables 
created in one mapping in another mapping? 
QUESTION #18 NO. 
We can use mapping parameters or variables in any transformation of the same 
mapping or mapplet in which U have created the mapping parameters or variables. 
Submitted by: Ray 
NO. You might want to use a workflow parameter/variable if you want it to be visible with other 
mappings/sessions 
Above answer was rated as good by the following members: 
ramonasiraj 
======================================= 
Hi 
The following sentences are extracted from the Informatica help as-is. Do they support the above two answers? 
After you create a parameter you can use it in the Expression Editor of any transformation in a mapping 
or mapplet. You can also use it in Source Qualifier transformations and reusable transformations. 
Shivaji Thaneru 
======================================= 
I differ on this; we can use global variables in sessions as well as in mappings. This provision is 
provided in Informatica 7.1.x versions; I have used it. Please check this in the properties. 
Regards 
-Vaibhav 
======================================= 
hi 
Thanks Shivaji, but the statement does not completely answer the question. 
A mapping parameter can be used in a reusable transformation, 
but does it mean u can use the mapping parameter wherever the instances of the reusable 
transformation are used? 
======================================= 
The scope of a mapping variable is the mapping in which it is defined. A variable Var1 defined in 
mapping Map1 can only be used in Map1. You cannot use it in another mapping say Map2. 
======================================= 
19.Informatica - Can u use the mapping parameters or variables 
created in one mapping in any other reusable transformation? 
QUESTION #19 Yes, because a reusable transformation is not contained within any 
mapplet or mapping. 
No best answer available. Please pick the good answer available or submit your answer. 
February 02, 2007 17:06:04 #1 
mahesh4346 Member Since: January 2007 Contribution: 6 
RE: Can u use the maping parameters or variables creat... 
======================================= 
But when one can't use mapping parameters and variables of one mapping in another mapping, how 
can they be used in a reusable transformation, when reusable transformations themselves can be used 
among multiple mappings? So I think one can't use mapping parameters and variables in reusable 
transformations. Please correct me if I am wrong. 
======================================= 
Hi, you can use the mapping parameters or variables in a reusable transformation. When you use the 
transformation in a mapping, during execution of the session it validates whether the mapping parameter 
used in the transformation is defined for that mapping. If not, the session fails. 
======================================= 
20.Informatica - How can you improve session performance in
the aggregator transformation?
QUESTION #20 Use sorted input. 
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
September 12, 2005 12:34:09 #1 
Praveen Vasudev 
RE: 
======================================= 
Use sorted input:
1. Use a Sorter transformation before the Aggregator.
2. Do not forget to check the option on the Aggregator that tells it the input is sorted on
the same keys as the group by.
The key order is also very important.
======================================= 
hi 
You can use the following guidelines to optimize the performance of an Aggregator transformation. 
Use sorted input to decrease the use of aggregate caches. 
Sorted input reduces the amount of data cached during the session and improves session performance. 
Use this option with the Sorter transformation to pass sorted data to the Aggregator transformation. 
Limit connected input/output or output ports. 
Limit the number of connected input/output or output ports to reduce the amount of data the Aggregator 
transformation stores in the data cache. 
Filter before aggregating. 
If you use a Filter transformation in the mapping place the transformation before the Aggregator 
transformation to reduce unnecessary aggregation. 
Shivaji T 
======================================= 
Following are the 3 ways with which we can improve the session performance:- 
a) Use sorted input to decrease the use of aggregate caches. 
b) Limit connected input/output or output ports 
c) Filter before aggregating (if you are using any filter condition) 
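The benefit of option (a) can be sketched outside Informatica. The following is an illustrative Python sketch (not Informatica code, and the row layout is assumed): with unsorted input the aggregator must cache every group until the end of the data; with sorted input it only ever holds one group at a time.

```python
# Illustrative sketch: why sorted input shrinks the aggregate cache.
from itertools import groupby

def aggregate_unsorted(rows):
    # Must hold ALL groups in the cache until the input is exhausted.
    cache = {}
    for key, value in rows:
        cache[key] = cache.get(key, 0) + value
    return cache

def aggregate_sorted(rows):
    # Rows pre-sorted on the group key: only ONE group is cached at a time,
    # and each group can be flushed as soon as the key changes.
    return {key: sum(v for _, v in grp)
            for key, grp in groupby(rows, key=lambda r: r[0])}

rows = [("A", 10), ("A", 5), ("B", 7), ("B", 3)]
assert aggregate_unsorted(rows) == aggregate_sorted(sorted(rows)) == {"A": 15, "B": 10}
```

Both functions produce the same result; the difference is only how much state must be held in memory while the input streams through.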
======================================= 
Incremental aggregation also improves performance, because it passes only the new data to the
mapping and uses historical cache data to perform the aggregation.
======================================= 
to improve session performance in aggregator transformation enable the session option Incremental 
Aggregation 
======================================= 
-Use sorted input to decrease the use of aggregate caches. 
-Limit connected input/output or output ports. 
Limit the number of connected input/output or output ports to reduce the amount of data the Aggregator 
transformation stores in the data cache. 
-Filter the data before aggregating it. 
======================================= 
21.Informatica - What is aggregate cache in aggregator
transformation?
QUESTION #21 The aggregator stores data in the aggregate cache until it
completes aggregate calculations. When you run a session that uses an aggregator
transformation, the Informatica Server creates index and data caches in memory
to process the transformation. If the Informatica Server requires more space, it
stores overflow values in cache files.
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
January 19, 2006 05:00:00 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What is aggregate cache in aggregator transforamti... 
======================================= 
When you run a workflow that uses an Aggregator transformation the Informatica Server creates index 
and data caches in memory to process the transformation. If the Informatica Server requires more space 
it stores overflow values in cache files. 
Cheers 
Sithu 
======================================= 
Aggregate cache contains data values while aggregate calculations are being performed. Aggregate 
cache is made up of index cache and data cache. Index cache contains group values and data cache 
consists of row values. 
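The index-cache/data-cache split described above can be sketched as follows. This is an illustrative Python sketch only (the internal structure is assumed, not actual server internals): the index cache holds the group-by key values, and the data cache holds the aggregate row values for each group.

```python
# Illustrative sketch of an aggregator's two caches:
# index cache -> group key values; data cache -> aggregate row values.
index_cache = {}   # group key -> slot in the data cache
data_cache = []    # one aggregate row per group

for dept, salary in [("SALES", 100), ("HR", 80), ("SALES", 50)]:
    if dept not in index_cache:            # new group: register in both caches
        index_cache[dept] = len(data_cache)
        data_cache.append({"dept": dept, "sum_salary": 0})
    data_cache[index_cache[dept]]["sum_salary"] += salary

print(data_cache)
# -> [{'dept': 'SALES', 'sum_salary': 150}, {'dept': 'HR', 'sum_salary': 80}]
```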
======================================= 
When the server runs a session with an aggregate transformation, it stores data in memory until it
completes the aggregation.
When you partition a source, the server creates one memory cache and one disk cache for each partition. It
routes the data from one partition to another based on the group key values of the transformation.
======================================= 
22.Informatica - What are the differences between the Joiner
transformation and the Source Qualifier transformation?
QUESTION #22 You can join heterogeneous data sources in a Joiner transformation,
which we cannot achieve in a Source Qualifier transformation.
You need matching keys to join two relational sources in a Source Qualifier
transformation, whereas you do not need matching keys to join two sources in a Joiner.
Two relational sources should come from the same data source in a Source Qualifier. A
Joiner can join relational sources that come from different data sources.
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
January 27, 2006 01:45:56 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What r the diffrence between joiner transformation... 
======================================= 
Source Qualifier: homogeneous sources
Joiner: heterogeneous sources
Cheers 
Sithu 
======================================= 
Hi 
The Source Qualifier transformation provides an alternate way to filter rows. Rather than filtering rows 
from within a mapping the Source Qualifier transformation filters rows when read from a source. The 
main difference is that the source qualifier limits the row set extracted from a source while the Filter 
transformation limits the row set sent to a target. Since a source qualifier reduces the number of rows 
used throughout the mapping it provides better performance. 
However the Source Qualifier transformation only lets you filter rows from relational sources while the 
Filter transformation filters rows from any type of source. Also note that since it runs in the database 
you must make sure that the filter condition in the Source Qualifier transformation only uses standard 
SQL. 
Shivaji Thaneru 
======================================= 
Hi, as per my knowledge you need matching keys to join two relational sources both in the Source Qualifier
and in the Joiner transformation. But the difference is that in the Source Qualifier both keys must have a
primary key - foreign key relation, whereas in the Joiner transformation that is not needed.
======================================= 
The Source Qualifier is used for reading data from the database, whereas the Joiner transformation is used for
joining two data tables.
The Source Qualifier can also be used to join two tables, but the condition is that both tables should be
from a relational database and should have primary keys with the same data structure.
Using a Joiner we can join data from two heterogeneous sources, like two flat files, or one source from a
relational database and one from a flat file.
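What "heterogeneous join" means can be sketched outside the tool. The following is an illustrative Python sketch with hypothetical data (not Informatica code): one source is parsed from a flat file, the other comes from a relational query, and the join happens in the integration layer rather than inside one database.

```python
# Illustrative sketch: joining a flat-file source with a relational source.
import csv
import io
import sqlite3

# "Flat file" source: employee master data as CSV text.
flat_file = io.StringIO("empno,ename\n1,Smith\n2,Jones\n")
emp_rows = {int(r["empno"]): r["ename"] for r in csv.DictReader(flat_file)}

# "Relational" source: department assignments in an in-memory database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dept_assign (empno INTEGER, dept TEXT)")
db.executemany("INSERT INTO dept_assign VALUES (?, ?)", [(1, "SALES"), (2, "HR")])

# The join itself happens in the tool, not in either source system.
joined = [(empno, emp_rows.get(empno), dept)
          for empno, dept in db.execute(
              "SELECT empno, dept FROM dept_assign ORDER BY empno")]
print(joined)  # -> [(1, 'Smith', 'SALES'), (2, 'Jones', 'HR')]
```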
======================================= 
23.Informatica - In which conditions can we not use the Joiner
transformation (limitations of the Joiner transformation)?
QUESTION #23 Both pipelines begin with the same original data source.
Both input pipelines originate from the same Source Qualifier transformation.
Both input pipelines originate from the same Normalizer transformation.
Both input pipelines originate from the same Joiner transformation.
Either input pipeline contains an Update Strategy transformation.
Either input pipeline contains a connected or unconnected Sequence Generator
transformation.
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
January 25, 2006 12:18:35 #1 
Surendra 
RE: In which condtions we can not use joiner transform... 
======================================= 
This is no longer valid in version 7.2 
Now we can use a joiner even if the data is coming from the same source. 
SK 
======================================= 
You cannot use a Joiner transformation in the following situations(according to infa 7.1): 
- Either input pipeline contains an Update Strategy transformation.
- You connect a Sequence Generator transformation directly before the Joiner
transformation.
======================================= 
I don't understand the second one, which says we have a Sequence Generator. Can you please explain
that one?
======================================= 
Can you please let me know the correct and clear answer for Limitations of joiner transformation? 
swapna 
======================================= 
You cannot use a Joiner transformation in the following situation (according to Informatica 7.1): when you
connect a Sequence Generator transformation directly before the Joiner transformation.
For more information check out the Informatica 7.1 manual.
======================================= 
What about join conditions? Can we have a != condition in a joiner?
======================================= 
No, in a Joiner transformation you can only use an equal to (=) as a join condition.
Any other sort of comparison operator is not allowed:
> < != <> etc. are not allowed as a join condition.
Utsav
======================================= 
Yes, the Joiner only supports the equality condition.
The Joiner transformation does not match null values. For example if both EMP_ID1 and EMP_ID2 
from the example above contain a row with a null value the PowerCenter Server does not consider them 
a match and does not join the two rows. To join rows with null values you can replace null input with 
default values and then join on the default values. 
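The null-matching rule above can be sketched as follows. This is an illustrative Python sketch with hypothetical rows (not Informatica code): an equality join never matches NULL (None) keys, but replacing nulls with a default value first makes those rows join on the default.

```python
# Illustrative sketch: NULL keys never match in an equality join.
def equi_join(master, detail):
    return [(mk, mv, dv) for mk, mv in master
                         for dk, dv in detail
                         if mk is not None and mk == dk]  # None never matches

master = [(None, "m1"), (10, "m2")]
detail = [(None, "d1"), (10, "d2")]
assert equi_join(master, detail) == [(10, "m2", "d2")]   # null-key rows dropped

# Workaround: replace nulls with a default value, then join on the default.
DEFAULT = -1   # assumed sentinel that cannot occur as a real key
master2 = [(DEFAULT if k is None else k, v) for k, v in master]
detail2 = [(DEFAULT if k is None else k, v) for k, v in detail]
assert len(equi_join(master2, detail2)) == 2             # null-key rows now join
```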
======================================= 
We cannot use a Joiner transformation in the following two conditions:
1. When our data comes through an Update Strategy transformation; in other words, after an Update Strategy
we cannot add a Joiner transformation.
2. We cannot connect a Sequence Generator transformation directly before the Joiner transformation.
======================================= 
24.Informatica - What are the settings that you use to configure the
Joiner transformation?
QUESTION #24 Master and detail source 
Type of join 
Condition of the join 
Click Here to view complete document 
Submitted by: sithusithu 
- Master and detail source
- Type of join
- Condition of the join
The Joiner transformation supports the following join types, which you set in the Properties tab:
- Normal (Default)
- Master Outer
- Detail Outer
- Full Outer
Cheers, 
Sithu 
Above answer was rated as good by the following members: 
vivek1708 
======================================= 
There are a number of properties that you use to configure a Joiner
transformation:
1) CASE SENSITIVE STRING COMPARISON: To join strings on a case-sensitive basis.
2) WORKING DIRECTORY: Where to create the caches.
3) JOIN CONDITION: Like join on a.s = v.n.
4) JOIN TYPE: (Normal, master outer, detail outer or full outer)
5) NULL ORDERING IN MASTER
6) NULL ORDERING IN DETAIL
7) TRACING LEVEL: Level of detail about the operations.
8) INDEX CACHE: Stores the group values of the input, if any.
9) DATA CACHE: Stores the value of each row of data.
10) SORTED INPUT: A check box that must be checked if the
input to the Joiner is sorted.
11) TRANSFORMATION SCOPE: The data to be taken into consideration (transaction
or all input). Use transaction if the result depends only on the rows being processed; use all input
if it depends on other data when processing a row. For example, a Joiner using the same
source in the pipeline keeps the data within the scope of a transaction; with a lookup
the result depends on other data, and if a dynamic cache is enabled it has to
process against the other incoming data, so you will have to go for all input.
======================================= 
25.Informatica - What are the join types in the Joiner transformation?
QUESTION #25 Normal (Default) 
Master outer 
Detail outer 
Full outer 
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
September 12, 2005 12:38:39 #1 
Praveen Vasudev 
RE: 
======================================= 
Normal (Default) -- only matching rows from both master and detail 
Master outer -- all detail rows and only matching rows from master 
Detail outer -- all master rows and only matching rows from detail 
Full outer -- all rows from both master and detail ( matching or non matching) 
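The four join types defined above can be sketched in Python. This is an illustrative sketch only (the tuple layout is assumed, and remember the Informatica naming: "master outer" keeps every DETAIL row, "detail outer" keeps every MASTER row):

```python
# Illustrative sketch of the four Joiner join types.
def joiner(master, detail, join_type="normal"):
    master_keys = {k for k, _ in master}
    detail_keys = {k for k, _ in detail}
    # Normal: only matching rows from both master and detail.
    rows = [(k, mv, dv) for k, mv in master for dk, dv in detail if k == dk]
    if join_type in ("master outer", "full outer"):   # keep unmatched detail rows
        rows += [(k, None, dv) for k, dv in detail if k not in master_keys]
    if join_type in ("detail outer", "full outer"):   # keep unmatched master rows
        rows += [(k, mv, None) for k, mv in master if k not in detail_keys]
    return rows

master = [(1, "M1"), (2, "M2")]
detail = [(2, "D2"), (3, "D3")]
assert joiner(master, detail) == [(2, "M2", "D2")]                   # normal
assert (3, None, "D3") in joiner(master, detail, "master outer")     # all detail
assert (1, "M1", None) in joiner(master, detail, "detail outer")     # all master
assert len(joiner(master, detail, "full outer")) == 3                # all rows
```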
======================================= 
Follow these steps:
1. In the Mapping Designer choose Transformation-Create. Select the Joiner transformation. Enter
a name, click OK.
The naming convention for Joiner transformations is JNR_TransformationName. Enter a description for
the transformation. This description appears in the Repository Manager, making it easier for you or
others to understand or remember what the transformation does.
The Designer creates the Joiner transformation. Keep in mind that you cannot use a Sequence Generator
or Update Strategy transformation as a source to a Joiner transformation.
2. Drag all the desired input/output ports from the first source into the Joiner transformation.
The Designer creates input/output ports for the source fields in the Joiner as detail fields by default.
You can edit this property later.
3. Select and drag all the desired input/output ports from the second source into the Joiner
transformation.
The Designer configures the second set of source fields as master fields by default.
4. Double-click the title bar of the Joiner transformation to open the Edit Transformations dialog
box.
5. Select the Ports tab.
6. Click any box in the M column to switch the master/detail relationship for the sources. Change
the master/detail relationship if necessary by selecting the master source in the M column.
Tip: Designating the source with fewer unique records as master increases performance during a join.
7. Add default values for specific ports as necessary.
Certain ports are likely to contain NULL values, since the fields in one of the sources may be empty.
You can specify a default value if the target database does not handle NULLs.
8. Select the Condition tab and set the condition.
9. Click the Add button to add a condition. You can add multiple conditions. The master and detail
ports must have matching datatypes. The Joiner transformation only supports equivalent (=) joins.
10. Select the Properties tab and enter any additional settings for the transformation.
11. Click OK.
12. Choose Repository-Save to save changes to the mapping.
Cheers 
Sithu 
======================================= 
26.Informatica - What are the joiner caches?
QUESTION #26 When a Joiner transformation occurs in a session, the
Informatica Server reads all the records from the master source and builds index
and data caches based on the master rows.
After building the caches, the Joiner transformation reads records from the detail
source and performs the join.
Click Here to view complete document 
Submitted by: bneha15 
For version 7.x and above : 
When the PowerCenter Server processes a Joiner transformation, it reads rows from both sources 
concurrently and builds the index and data cache based on the master rows. The PowerCenter Server 
then performs the join based on the detail source data and the cache data. To improve performance for 
an unsorted Joiner transformation, use the source with fewer rows as the master source. To improve 
performance for a sorted Joiner transformation, use the source with fewer duplicate key values as the 
master. 
Above answer was rated as good by the following members: 
vivek1708 
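The cache build described above (master rows cached, detail rows streamed) can be sketched as follows. This is an illustrative Python sketch only, not server internals; it also shows why the smaller source should be the master: only the master side is held in memory.

```python
# Illustrative sketch of the joiner cache: cache the master, stream the detail.
def build_master_cache(master_rows):
    cache = {}
    for key, value in master_rows:         # master is fully read into the cache
        cache.setdefault(key, []).append(value)
    return cache

def join_detail(detail_rows, cache):
    for key, dvalue in detail_rows:        # detail is streamed, never cached
        for mvalue in cache.get(key, []):  # probe the master cache per row
            yield (key, mvalue, dvalue)

cache = build_master_cache([(1, "M1"), (2, "M2")])
assert list(join_detail([(2, "D2"), (3, "D3")], cache)) == [(2, "M2", "D2")]
```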
======================================= 
From a performance perspective, always make the smaller of the two joining tables the master.
======================================= 
Specifies the directory used to cache master records and the index to these records. By default the 
cached files are created in a directory specified by the server variable $PMCacheDir. If you override the 
directory make sure the directory exists and contains enough disk space for the cache files. The 
directory can be a mapped or mounted drive. 
Cheers 
Sithu 
======================================= 
27.Informatica - What is the Lookup transformation?
QUESTION #27 Use a Lookup transformation in your mapping to look up data in a
relational table, view or synonym.
The Informatica Server queries the lookup table based on the lookup ports in the
transformation. It compares the Lookup transformation port values to lookup
table column values based on the lookup condition.
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
December 09, 2005 00:06:38 #1 
phani 
RE: what is the look up transformation? 
======================================= 
Using it we can access data from a relational table which is not a source in the mapping.
For example, suppose the source contains only Empno, but we want Empname also in the mapping. Then
instead of adding another table which contains Empname as a source, we can look up the table and get the
Empname in the target.
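The Empno/Empname example can be sketched as follows. This is an illustrative Python sketch with hypothetical EMP data (not Informatica code): the lookup fetches a related value for each source row without making the EMP table a mapping source.

```python
# Illustrative sketch: look up Empname for each source Empno.
lookup_table = {101: "Smith", 102: "Jones"}   # EMP: Empno -> Empname

source_rows = [{"Empno": 101, "Sal": 5000}, {"Empno": 102, "Sal": 6000}]

# Enrich each source row with the related value from the lookup.
target_rows = [dict(row, Empname=lookup_table.get(row["Empno"]))
               for row in source_rows]
print(target_rows[0])  # -> {'Empno': 101, 'Sal': 5000, 'Empname': 'Smith'}
```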
======================================= 
In DecisionStream, a lookup is a simple single-level reference structure with no parent/child
relationships. Use a lookup when you have a set of reference members that you do not need to organize
hierarchically.
======================================= 
Use a Lookup transformation in your mapping to look up data in a relational table view or synonym. 
Import a lookup definition from any relational database to which both the Informatica Client and Server 
can connect. You can use multiple Lookup transformations in a mapping. 
Cheers 
Sithu 
======================================= 
Lookup transformation in a mapping is used to look up data in a flat file or a relational table view or 
synonym. You can import a lookup definition from any flat file or relational database to which both the 
PowerCenter Client and Server can connect. You can use multiple Lookup transformations in a 
mapping. 
I hope this would be helpful for you. 
Cheers 
Sridhar 
======================================= 
28.Informatica - Why use the lookup transformation ? 
QUESTION #28 To perform the following tasks. 
Get a related value. For example, if your source table includes employee ID, but 
you want to include the employee name in your target table to make your 
summary data easier to read. 
Perform a calculation. Many normalized tables include values used in a 
calculation, such as gross sales per invoice or sales tax, but not the calculated 
value (such as net sales). 
Update slowly changing dimension tables. You can use a Lookup transformation 
to determine whether records already exist in the target. 
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
August 21, 2006 22:26:47 #1 
samba 
RE: Why use the lookup transformation ? 
======================================= 
A lookup table is nothing but a lookup on a table, view, synonym or flat file.
By using a lookup we can get a related value with a join condition and perform calculations.
Two types of lookups are there:
1) Connected
2) Unconnected
A connected lookup is within the pipeline only, but an unconnected lookup is not connected to the pipeline.
An unconnected lookup returns a single column value only.
Let me know if you want any additional information.
cheers 
samba 
======================================= 
Hey, with regard to lookups, is there a dynamic lookup and a static lookup? If so, how do you set it? And is
there a combination of dynamic connected lookups and static unconnected lookups?
======================================= 
Lookups have two types, Connected and Unconnected. Usually we use a lookup to get a related
value from a table. It has an Input port, Output port, Lookup port and Return port, where the Lookup port
looks up the corresponding column for the value and the Return port returns the value. We usually use it when
there are no columns in common.
======================================= 
For maintaining the slowly changing dimensions.
======================================= 
Hi
The answer to your question is yes.
There are 2 types of lookups: dynamic and normal (which you have termed static).
To configure, just double-click on the Lookup transformation and go to the Properties tab.
There'll be an option - dynamic lookup cache. Select that.
If you don't select this option then the lookup is merely a normal lookup.
Please let me know if there are any questions. 
Thanks. 
======================================= 
29.Informatica - What are the types of lookup?
QUESTION #29 Connected and unconnected 
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
November 08, 2005 18:44:53 #1 
swati 
RE: What r the types of lookup? 
======================================= 
i) Connected
ii) Unconnected
iii) Cached
iv) Uncached
======================================= 
1. Connected lookup
2. Unconnected lookup
Cache types:
1. Persistent cache
2. Re-cache from database
3. Static cache
4. Dynamic cache
5. Shared cache
Cheers
Sithu
======================================= 
Hello,
there are only two types of lookup:
1) Connected lookup
2) Unconnected lookup.
I don't understand why people are specifying the cache types; I want to know whether nowadays caches are
also taken into this category of lookup.
If yes, do specify in the answer list.
Thank you
======================================= 
30.Informatica - Differences between connected and unconnected 
lookup? 
QUESTION #30 
Connected lookup:
- Receives input values directly from the pipeline.
- You can use a dynamic or static cache.
- Cache includes all lookup columns used in the mapping.
- Supports user-defined default values.
Unconnected lookup:
- Receives input values from the result of a :LKP expression in another transformation.
- You can use a static cache only.
- Cache includes all lookup output ports in the lookup condition and the lookup/return port.
- Does not support user-defined default values.
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
February 03, 2006 03:25:15 #1 
Prasanna 
RE: Differences between connected and unconnected look... 
======================================= 
In addition: 
A Connected Lookup can return/pass multiple columns of data, whereas an unconnected one can return only
one port.
======================================= 
In addition to this: in a Connected lookup, if the condition is not satisfied it returns the default value. In an
Unconnected lookup, if the condition is not satisfied it returns NULL.
======================================= 
Hi
Differences Between Connected and Unconnected Lookups
Connected Lookup:
- Receives input values directly from the pipeline.
- You can use a dynamic or static cache.
- Cache includes all lookup columns used in the mapping (that is, lookup source columns included in
the lookup condition and lookup source columns linked as output ports to other transformations).
- Can return multiple columns from the same row or insert into the dynamic lookup cache.
- If there is no match for the lookup condition, the PowerCenter Server returns the default value for all
output ports. If you configure dynamic caching, the PowerCenter Server inserts rows into the cache or
leaves it unchanged.
- If there is a match for the lookup condition, the PowerCenter Server returns the result of the lookup
condition for all lookup/output ports. If you configure dynamic caching, the PowerCenter Server
either updates the row in the cache or leaves the row unchanged.
- Passes multiple output values to another transformation. Link lookup/output ports to another
transformation.
- Supports user-defined default values.
Unconnected Lookup:
- Receives input values from the result of a :LKP expression in another transformation.
- You can use a static cache.
- Cache includes all lookup/output ports in the lookup condition and the lookup/return port.
- Designate one return port (R); returns one column from each row.
- If there is no match for the lookup condition, the PowerCenter Server returns NULL.
- If there is a match for the lookup condition, the PowerCenter Server returns the result of the
lookup condition into the return port.
- Passes one output value to another transformation. The lookup/output/return port
passes the value to the transformation calling the :LKP expression.
- Does not support user-defined default values.
Shivaji Thaneru
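The no-match behaviour described above can be sketched as follows. This is an illustrative Python sketch only (the data and port names are hypothetical): a connected lookup can return default values for all output ports on a miss, while an unconnected :LKP call returns a single value, NULL (None) on a miss.

```python
# Illustrative sketch: connected vs unconnected lookup no-match behaviour.
LOOKUP = {10: {"ename": "Smith", "dept": "SALES"}}

def connected_lookup(key, defaults={"ename": "N/A", "dept": "N/A"}):
    # Connected: may return several output ports; user-defined defaults on a miss.
    return LOOKUP.get(key, defaults)

def unconnected_lookup(key, return_port="ename"):
    # Unconnected: exactly one designated return port; NULL (None) on a miss.
    row = LOOKUP.get(key)
    return row[return_port] if row else None

assert connected_lookup(10) == {"ename": "Smith", "dept": "SALES"}
assert connected_lookup(99) == {"ename": "N/A", "dept": "N/A"}  # defaults
assert unconnected_lookup(10) == "Smith"                        # one value
assert unconnected_lookup(99) is None                           # NULL
```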
======================================= 
31.Informatica - what is meant by lookup caches? 
QUESTION #31 The Informatica Server builds a cache in memory when it
processes the first row of data in a cached Lookup transformation. It allocates
memory for the cache based on the amount you configure in the transformation or
session properties. The Informatica Server stores condition values in the index
cache and output values in the data cache.
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
September 28, 2006 06:34:33 #1 
srinivas vadlakonda 
RE: what is meant by lookup caches? 
======================================= 
The lookup cache is the temporary memory that is created by the Informatica Server to hold the lookup data
and to evaluate the lookup conditions.
======================================= 
A lookup cache is a temporary memory area which is created by the Informatica Server and which
stores the lookup data based on certain conditions. The caches are of four types: 1) Persistent 2)
Dynamic 3) Static and 4) Shared cache.
======================================= 
32.Informatica - What are the types of lookup caches?
QUESTION #32 Persistent cache: You can save the lookup cache files and reuse
them the next time the Informatica Server processes a lookup transformation
configured to use the cache.
Recache from database: If the persistent cache is not synchronized with the
lookup table, you can configure the lookup transformation to rebuild the lookup
cache.
Static cache: You can configure a static or read-only cache for the lookup table. By
default the Informatica Server creates a static cache. It caches the lookup table and
lookup values in the cache for each row that comes into the transformation. When
the lookup condition is true, the Informatica Server does not update the cache
while it processes the lookup transformation.
Dynamic cache: If you want to cache the target table and insert new rows into the
cache and the target, you can create a lookup transformation to use a dynamic cache.
The Informatica Server dynamically inserts data into the target table.
Shared cache: You can share the lookup cache between multiple transformations. You can
share an unnamed cache between transformations in the same mapping.
Click Here to view complete document 
No best answer available. Please pick the good answer available or submit your answer. 
December 13, 2005 06:02:36 #1 
Sithu 
RE: What r the types of lookup caches? 
======================================= 
Cache 
1. Static cache 
2. Dynamic cache 
3. Persistent cache 
Sithu 
======================================= 
Caches are of three types, namely: Dynamic cache, Static cache and Persistent cache.
Cheers 
Sithu 
======================================= 
Dynamic cache
Persistent cache
Re-cache
Shared cache
======================================= 
Hi, could anyone get me information on where you would use these caches for lookups and how you set
them?
thanks 
infoseeker 
======================================= 
There are 4 types of lookup cache:
Persistent, Recache, Static & Dynamic.
Bye 
Stephen 
======================================= 
Types of Caches are : 
1) Dynamic Cache 
2) Static Cache 
3) Persistent Cache 
4) Shared Cache 
5) Unshared Cache 
======================================= 
There are five types of caches:
static cache
dynamic cache
persistent cache
shared cache
re-cache
======================================= 
33.Informatica - Difference between static cache and dynamic 
cache 
QUESTION #33 
Static cache:
- You cannot insert or update the cache.
- The Informatica Server returns a value from the lookup
table or cache when the condition is true. When the
condition is not true, the Informatica Server returns the
default value for connected transformations and NULL for
unconnected transformations.
Dynamic cache:
- You can insert rows into the cache as you
pass them to the target.
- The Informatica Server inserts rows
into the cache when the condition is false.
This indicates that the row is not
in the cache or target table. You can
pass these rows to the target table.
Click Here to view complete document 
Submitted by: vp 
Let's say, for example, your lookup table is your target table. When you create the Lookup selecting the 
dynamic cache, it will look up values, and if there is no match it will insert the row into both 
the target and the lookup cache (hence the term dynamic cache: it builds up as you go along); if there 
is a match it will update the row in the target. Static caches, on the other hand, don't get updated when 
you do a lookup. 
Above answer was rated as good by the following members: 
ssangi, ananthece 
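The insert-or-update behaviour of a dynamic lookup cache described above can be sketched in plain Python. This is a conceptual illustration only, not Informatica internals; the key column name `id` is made up:

```python
def dynamic_lookup_load(source_rows, target, cache):
    """Sketch of a dynamic lookup cache: the cache starts as a copy of the
    target and grows as rows pass through, so a duplicate key arriving later
    in the same run hits the cache without re-querying the target."""
    for row in source_rows:
        key = row["id"]
        if key in cache:
            # Match found: update the existing target row.
            target[key].update(row)
        else:
            # No match: insert into both the cache and the target, so a
            # later duplicate in this run is caught by the cache.
            cache[key] = row
            target[key] = dict(row)

# A static cache, by contrast, is built once and never modified during the run.
```
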
======================================= 
34.Informatica - Which transformation should we use to 
normalize the COBOL and relational sources? 
QUESTION #34 Normalizer Transformation. 
When you drag the COBOL source into the Mapping Designer workspace, the 
Normalizer transformation automatically appears, creating input and output 
ports for every column in the source. 
January 19, 2006 01:08:06 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: Which transformation should we use to normalize th... 
======================================= 
The Normalizer transformation normalizes records from COBOL and relational sources allowing you to 
organize the data according to your own needs. A Normalizer transformation can appear anywhere in a 
data flow when you normalize a relational source. Use a Normalizer transformation instead of the 
Source Qualifier transformation when you normalize a COBOL source. When you drag a COBOL 
source into the Mapping Designer workspace the Normalizer transformation automatically appears 
creating input and output ports for every column in the source 
Cheers 
Sithu 
======================================= 
35.Informatica - How the informatica server sorts the string 
values in Ranktransformation? 
QUESTION #35 When the Informatica server runs in the ASCII data movement 
mode, it sorts session data using a binary sort order. If you configure the session to use 
a binary sort order, the Informatica server calculates the binary value of each 
string and returns the specified number of rows with the highest binary values for 
the string. 
December 09, 2005 00:25:27 #1 
phani 
RE: How the informatica server sorts the string values... 
======================================= 
When Informatica Server runs in UNICODE data movement mode then it uses the sort order configured 
in session properties. 
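Under a binary sort order, strings compare by their raw byte values, so for example uppercase letters rank below lowercase ones in ASCII. A small Python sketch of a top-N rank under this ordering (illustrative only; not Informatica's implementation):

```python
def top_n_binary(values, n=5):
    """Return the n strings with the highest binary (byte-wise) values,
    as a Rank transformation would under a binary sort order."""
    return sorted(values, key=lambda s: s.encode("utf-8"), reverse=True)[:n]
```

Note that `"a"` outranks `"Z"` here, because byte 97 is greater than byte 90; a linguistic sort order would behave differently.
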
======================================= 
36.Informatica - What is the Rankindex in Ranktransformation? 
QUESTION #36 The Designer automatically creates a RANKINDEX port for 
each Rank transformation. The Informatica Server uses the Rank Index port to 
store the ranking position for each record in a group. For example, if you create a 
Rank transformation that ranks the top 5 salespersons for each quarter, the rank 
index numbers the salespeople from 1 to 5: 
January 12, 2006 04:41:57 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What is the Rankindex in Ranktransformation? 
======================================= 
The port on which you want to generate the rank is known as the rank port; the generated values are known as 
the rank index. 
Cheers 
Sithu 
======================================= 
37.Informatica - What is the Router transformation? 
QUESTION #37 A Router transformation is similar to a Filter transformation 
because both transformations allow you to use a condition to test data. 
However, a Filter transformation tests data for one condition and drops the rows 
of data that do not meet the condition. A Router transformation tests data for 
one or more conditions and gives you the option to route rows of data that do 
not meet any of the conditions to a default output group. 
If you need to test the same input data based on multiple conditions, use a 
Router Transformation in a mapping instead of creating multiple Filter 
transformations to perform the same task. 
January 19, 2006 04:46:42 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What is the Router transformation? 
======================================= 
A Router transformation is similar to a Filter transformation because both transformations allow you 
to use a condition to test data. A Filter transformation tests data for one condition and drops the rows of 
data that do not meet the condition. However a Router transformation tests data for one or more 
conditions and gives you the option to route rows of data that do not meet any of the conditions to a 
default output group. 
Cheers 
Sithu 
======================================= 
Note: I think the definition and purpose of the Router transformation given by sithusithu and sithu is not fully clear 
or correct, as they have mentioned 
<A Router transformation tests data for one or more conditions > 
Sorry sithu and sithusithu, 
but what I want to clarify is that in a Filter transformation we can also give several conditions together, e.g. 
empno = 1234 AND sal > 25000 (2 conditions). 
The actual purposes of the Router transformation are: 
1. Similar to the Filter transformation: to separate the source data according to the conditions applied. 
2. To load data into different target tables from the same source, but with a different condition 
for each target table as required. 
e.g. From the emp table we want to load data into three (3) different target tables: T1 (where deptno = 10), T2 
(where deptno = 20) and T3 (where deptno = 30). 
For this, if we use Filter transformations, we need three (3) of them; 
so instead of three (3) Filter transformations we use only one (1) Router transformation. 
Advantages: 
1. Better performance, because with a Router transformation in the mapping the Informatica server processes the 
input data only once, instead of three times as with Filter transformations. 
2. Less complexity, because we use only one Router transformation instead of multiple Filter 
transformations. 
The Router transformation is active and connected. 
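The single-pass routing idea can be sketched in Python. This is a conceptual illustration, not Informatica code: each row is read once and tested against every user-defined group condition, with unmatched rows falling into the default group (in a real Router, a row satisfying several conditions goes to several groups, as below):

```python
def route_rows(rows, conditions):
    """Sketch of a Router: one pass over the source, each row tested against
    every user-defined group condition; rows matching no condition fall into
    the default output group. Group names and conditions are illustrative."""
    groups = {name: [] for name in conditions}
    groups["default"] = []
    for row in rows:
        matched = False
        for name, cond in conditions.items():
            if cond(row):
                groups[name].append(row)  # a row may land in several groups
                matched = True
        if not matched:
            groups["default"].append(row)
    return groups
```

Three separate Filter transformations would instead scan the input three times, one pass per condition, which is the performance point made above.
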
======================================= 
38.Informatica - What r the types of groups in Router 
transformation? 
QUESTION #38 Input group, Output group 
The Designer copies property information from the input ports of the input group 
to create a set of output ports for each output group. 
Two types of output groups: 
User-defined groups 
Default group 
You cannot modify or delete default groups. 
December 09, 2005 00:35:44 #1 
phani 
RE: What r the types of groups in Router transformatio... 
======================================= 
The input group contains the data coming from the source. We can create as many user-defined 
groups as required, one for each condition we want to specify. The default group contains all the rows of data 
that don't satisfy the condition of any group. 
======================================= 
A Router transformation has the following types of groups: 
- Input 
- Output 
Input Group 
The Designer copies property information from the input ports of the input group to create a set of 
output ports for each output group. 
Output Groups 
There are two types of output groups: 
- User-defined groups 
- Default group 
You cannot modify or delete output ports or their properties. 
Cheers 
Sithu 
======================================= 
39.Informatica - Why we use stored procedure transformation? 
QUESTION #39 For populating and maintaining data bases. 
January 19, 2006 04:41:34 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: Why we use stored procedure transformation? 
======================================= 
A Stored Procedure transformation is an important tool for populating and maintaining databases. 
Database administrators create stored procedures to automate time-consuming tasks that are too 
complicated for standard SQL statements. 
Cheers 
Sithu 
======================================= 
You might use stored procedures to do the following tasks: 
- Check the status of a target database before loading data into it. 
- Determine if enough space exists in a database. 
- Perform a specialized calculation. 
- Drop and recreate indexes. 
Shivaji Thaneru 
======================================= 
we use a stored procedure transformation to execute a stored procedure which in turn might do the 
above things in a database and more. 
======================================= 
can you give me a real time scenario please? 
======================================= 
40.Informatica - What is source qualifier transformation? 
QUESTION #40 When you add a relational or a flat file source definition to a 
mapping, you need to connect it to 
a Source Qualifier transformation. The Source Qualifier transformation represents 
the records 
that the Informatica server reads when it runs a session. 
Submitted by: Rama Rao B. 
The Source Qualifier is based on the source table and acts as an intermediary between the source and target metadata. It 
also generates the SQL used to read the source when mapping between the source and target metadata. 
Thanks, 
Rama Rao 
Above answer was rated as good by the following members: 
him.life 
======================================= 
When you add a relational or a flat file source definition to a mapping you need to connect it to a 
Source Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server 
reads when it executes a session. 
- Join data originating from the same source database. You can join two or more tables with 
primary-foreign key relationships by linking the sources to one Source Qualifier. 
- Filter records when the Informatica Server reads source data. If you include a filter condition, the 
Informatica Server adds a WHERE clause to the default query. 
- Specify an outer join rather than the default inner join. If you include a user-defined join, the 
Informatica Server replaces the join information specified by the metadata in the SQL query. 
- Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an 
ORDER BY clause to the default SQL query. 
- Select only distinct values from the source. If you choose Select Distinct, the Informatica Server 
adds a SELECT DISTINCT statement to the default SQL query. 
l Create a custom query to issue a special SELECT statement for the Informatica Server to read 
source data. For example you might use a custom query to perform aggregate calculations or execute a 
stored procedure. 
Cheers 
Sithu 
======================================= 
Def: the transformation which converts the source (relational or flat file) datatypes to Informatica datatypes, 
so it works as an intermediary between the source and the Informatica server. 
Tasks performed by the Source Qualifier transformation: 
1. Join data originating from the same source database. 
2. Filter records when the Informatica Server reads source data. 
3. Specify an outer join rather than the default inner join. 
4. Specify sorted ports. 
5. Select only distinct values from the source. 
6. Create a custom query to issue a special SELECT statement for the Informatica Server to read source 
data. 
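The way these options shape the generated SELECT can be sketched as a small query builder. This is purely illustrative of the idea (filter adds WHERE, sorted ports add ORDER BY, Select Distinct adds DISTINCT); it is not Informatica's actual SQL generator, and the table/column names are made up:

```python
def build_default_query(table, columns, source_filter=None,
                        sorted_ports=0, select_distinct=False):
    """Sketch of how Source Qualifier options shape the default query:
    - source_filter  -> appended as a WHERE clause
    - sorted_ports   -> ORDER BY on the first N connected ports
    - select_distinct -> SELECT DISTINCT instead of SELECT
    """
    select = "SELECT DISTINCT" if select_distinct else "SELECT"
    sql = f"{select} {', '.join(columns)} FROM {table}"
    if source_filter:
        sql += f" WHERE {source_filter}"
    if sorted_ports:
        sql += " ORDER BY " + ", ".join(columns[:sorted_ports])
    return sql
```
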
======================================= 
The Source Qualifier transformation is the beginning of the pipeline. Its main 
purpose is to read the data from a relational or flat file source and pass 
the rows it reads into the mapping, so that the data can flow into the other transformations. 
A Source Qualifier is a transformation that comes with every source definition if the source is a relational database; 
the Source Qualifier fires a SELECT statement on the source DB. 
With every source definition you will get a Source Qualifier; without the Source Qualifier your mapping will be 
invalid and you cannot define the pipeline to the other instances. 
If the source is COBOL, then for that source definition you will get a Normalizer transformation, not a 
Source Qualifier. 
======================================= 
41.Informatica - What r the tasks that source qualifier performs? 
QUESTION #41 Join data originating from the same source database. 
Filter records when the Informatica server reads source data. 
Specify an outer join rather than the default inner join. 
Specify sorted ports. 
Select only distinct values from the source. 
Create a custom query to issue a special SELECT statement for the Informatica 
server to read 
source data. 
January 24, 2006 03:42:08 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What r the tasks that source qualifier performs? 
======================================= 
- Join data originating from the same source database. You can join two or more tables with 
primary-foreign key relationships by linking the sources to one Source Qualifier. 
- Filter records when the Informatica Server reads source data. If you include a filter condition, the 
Informatica Server adds a WHERE clause to the default query. 
- Specify an outer join rather than the default inner join. If you include a user-defined join, the 
Informatica Server replaces the join information specified by the metadata in the SQL query. 
- Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an 
ORDER BY clause to the default SQL query. 
- Select only distinct values from the source. If you choose Select Distinct, the Informatica Server 
adds a SELECT DISTINCT statement to the default SQL query. 
- Create a custom query to issue a special SELECT statement for the Informatica Server to read 
source data. For example, you might use a custom query to perform aggregate calculations or execute a 
stored procedure. 
Cheers 
Sithu 
======================================= 
42.Informatica - What is the target load order? 
QUESTION #42 You specify the target load order based on the source qualifiers in a 
mapping. If you have multiple 
source qualifiers connected to multiple targets, you can designate the order in 
which the Informatica 
server loads data into the targets. 
March 01, 2006 14:27:34 #1 
saritha 
RE: What is the target load order? 
======================================= 
A target load order group is the collection of source qualifiers, transformations, and targets linked 
together in a mapping. 
======================================= 
43.Informatica - What is the default join that source qualifier 
provides? 
QUESTION #43 Inner equi join. 
January 24, 2006 03:40:28 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What is the default join that source qualifier pro... 
======================================= 
The Joiner transformation supports the following join types which you set in the Properties tab: 
l Normal (Default) 
l Master Outer 
l Detail Outer 
l Full Outer 
Cheers 
Sithu 
======================================= 
Equijoin on a key common to the sources drawn by the SQ. 
======================================= 
44.Informatica - What r the basic needs to join two sources in a 
source qualifier? 
QUESTION #44 The two sources should have a primary and foreign key 
relationship. 
The two sources should have matching data types. 
December 14, 2005 10:32:44 #1 
rishi 
RE: What r the basic needs to join two sources in a so... 
======================================= 
Both tables should have a common field with the same datatype. 
It is not necessary that they follow a primary and foreign key relationship; if any such relationship exists, 
it will help from a performance point of view. 
======================================= 
Also, if you are using a lookup in your mapping and the lookup table is small, then try to join that lookup 
in the Source Qualifier to improve performance. 
Regards 
SK 
======================================= 
Both the sources must be from same database. 
======================================= 
45.Informatica - what is update strategy transformation ? 
QUESTION #45 This transformation is used to maintain historical data, or just 
the most recent changes, in the target 
table. 
January 19, 2006 04:33:23 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: what is update strategy transformation ? 
======================================= 
The model you choose constitutes your update strategy: how to handle changes to existing rows. In 
PowerCenter and PowerMart, you set your update strategy at two different levels: 
- Within a session. When you configure a session, you can instruct the Informatica Server to 
either treat all rows in the same way (for example, treat all rows as inserts) or use instructions 
coded into the session mapping to flag rows for different database operations. 
- Within a mapping. Within a mapping, you use the Update Strategy transformation to flag rows 
for insert, delete, update, or reject. 
Cheers 
Sithu 
======================================= 
The Update Strategy transformation is used for flagging records for insert, 
update, delete and reject. 
In Informatica PowerCenter you can develop an update strategy at two levels: 
- use an Update Strategy transformation in the mapping design 
- use the target table options in the session 
The following are the target table options: 
Insert 
Update 
Delete 
Update as Insert 
Update else Insert 
Thanks 
Rekha 
======================================= 
46.Informatica - What is the default source option for update 
stratgey transformation? 
QUESTION #46 Data driven. 
March 28, 2006 05:03:53 #1 
Gyaneshwar 
RE: What is the default source option for update strat... 
======================================= 
DATA DRIVEN 
======================================= 
47.Informatica - What is Datadriven? 
QUESTION #47 The Informatica server follows instructions coded into Update 
Strategy transformations within the session mapping to determine how to flag 
records for insert, update, delete or reject. If you do not choose the Data Driven option 
setting, the Informatica server ignores all Update Strategy transformations in the 
mapping. 
January 19, 2006 04:36:22 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What is Datadriven? 
======================================= 
The Informatica Server follows instructions coded into Update Strategy transformations within the 
session mapping to determine how to flag rows for insert delete update or reject. 
If the mapping for the session contains an Update Strategy transformation this field is marked Data 
Driven by default. 
Cheers 
Sithu 
======================================= 
When the Data Driven option is selected in the session properties, the session considers the update strategy 
(DD_UPDATE, DD_INSERT, DD_DELETE, DD_REJECT) used in the mapping and not the options 
selected in the session properties. 
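The idea of Data Driven mode, where each row carries its own operation instead of one session-wide setting, can be sketched in Python. The constant values 0-3 match the documented DD_* constants; the `flag` callable is a hypothetical stand-in for the mapping's update-strategy expression:

```python
# DD_* constants as documented for the Update Strategy expression.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def data_driven(rows, flag):
    """Sketch of Data Driven mode: the operation applied to each row comes
    from the update-strategy expression (here the callable `flag`), not from
    a single session-level 'treat all rows as ...' setting."""
    op_name = {DD_INSERT: "insert", DD_UPDATE: "update",
               DD_DELETE: "delete", DD_REJECT: "reject"}
    return [(op_name[flag(row)], row) for row in rows]
```
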
======================================= 
48.Informatica - What r the options in the target session of update 
strategy transsformatioin? 
QUESTION #48 Insert 
Delete 
Update 
Update as update 
Update as insert 
Update else insert 
Truncate table 
February 03, 2006 03:46:07 #1 
Prasanna 
RE: What r the options in the target session of update... 
======================================= 
Update as Insert: 
This option specifies that all the update records from the source are flagged as inserts in the target. In other 
words, instead of updating the records in the target, they are inserted as new records. 
Update else Insert: 
This option enables Informatica to flag records either for update, if they are old, or for insert, if they are 
new records from the source. 
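The difference between these two target options can be sketched in Python. This is a conceptual illustration only; the key column `id` and the list-of-dicts target are assumptions for the sketch:

```python
def load_rows(target, rows, mode, key="id"):
    """Sketch of two target-session update options.
    'update_as_insert': every flagged update is written as a brand-new row.
    'update_else_insert': update when the key already exists, else insert.
    `target` is a list of dicts so update-as-insert can hold several rows
    per key."""
    for row in rows:
        if mode == "update_as_insert":
            target.append(dict(row))              # never update: always insert
        elif mode == "update_else_insert":
            for existing in target:
                if existing[key] == row[key]:
                    existing.update(row)          # old record: update in place
                    break
            else:
                target.append(dict(row))          # new record: insert
```
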
======================================= 
49.Informatica - What r the types of maping wizards that r to be 
provided in Informatica? 
QUESTION #49 The Designer provides two mapping wizards to help you create 
mappings quickly and easily. Both wizards are designed to create mappings for 
loading and maintaining star schemas, a series of dimensions related to a central 
fact table. 
Getting Started Wizard. Creates mappings to load static fact and dimension 
tables, as well as slowly growing dimension tables. 
Slowly Changing Dimensions Wizard. Creates mappings to load slowly 
changing dimension tables based on the amount of historical dimension data you 
want to keep and the method you choose to handle historical dimension data. 
January 09, 2006 02:43:25 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What r the types of maping wizards that r to be pr... 
======================================= 
Getting Started wizard: 
1. Simple Pass Through 
2. Slowly Growing Target 
Slowly Changing Dimensions wizard: 
Type 1: most recent values 
Type 2: full history (version, flag, or date) 
Type 3: current and one previous 
Inf designer : 
Mapping -> wizards --> 1) Getting started -->Simple pass through mapping 
-->Slowly growing target 
2) slowly changing dimensions---> SCD 1 (only recent values) 
--->SCD 2(HISTORY using flag or version or time) 
--->SCD 3(just recent values) 
one important point is dimensions are 2 types 
1)slowly growing targets 
2)slowly changing dimensions. 
======================================= 
50.Informatica - What r the types of maping in Getting Started 
Wizard? 
QUESTION #50 Simple Pass Through mapping: 
Loads a static fact or dimension table by inserting all rows. Use this mapping 
when you want to drop all existing data from your table before loading new 
data. 
Slowly Growing Target: 
Loads a slowly growing fact or dimension table by inserting new rows. Use this 
mapping to load new data when existing data does not require updates. 
January 09, 2006 02:46:25 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What r the types of maping in Getting Started Wiza... 
======================================= 
1. Simple Pass Through 
2. Slowly Growing Target 
Cheers 
Sithu 
======================================= 
51.Informatica - What r the mapings that we use for slowly 
changing dimension table? 
QUESTION #51 Type1: Rows containing changes to existing dimensions are 
updated in the target by overwriting the existing dimension. In the Type 1 
Dimension mapping, all rows contain current dimension data. 
Use the Type 1 Dimension mapping to update a slowly changing dimension table 
when you do not need to keep any previous versions of dimensions in the table. 
Type 2: The Type 2 Dimension Data mapping inserts both new and changed 
dimensions into the target. Changes are tracked in the target table by versioning 
the primary key and creating a version number for each dimension in the table. 
Use the Type 2 Dimension/Version Data mapping to update a slowly changing 
dimension table when you want to keep a full history of dimension data in the 
table. Version numbers and versioned primary keys track the order of changes to 
each dimension. 
Type 3: The Type 3 Dimension mapping filters source rows based on user-defined 
comparisons and inserts only those found to be new dimensions to the target. 
Rows containing changes to existing dimensions are updated in the target. When 
updating an existing dimension, the Informatica Server saves existing data in 
different columns of the same row and replaces the existing data with the 
updates 
June 03, 2006 09:39:20 #1 
mamatha 
RE: What r the mapings that we use for slowly changing... 
======================================= 
Hello sir, 
I want full information on slowly changing dimensions, and I also want a project on slowly changing 
dimensions in Informatica. 
Thank you, sir. 
mamatha. 
======================================= 
1. Update Strategy transformation 
2. Lookup transformation. 
======================================= 
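The Type 2 (version-number) approach described above can be sketched in Python. This is a conceptual illustration of the versioning idea only, not an Informatica mapping; the column names `cust_id` and `version` are made up:

```python
def scd2_version_load(dim, row, key="cust_id"):
    """Sketch of Type 2 (version-number) loading: a changed dimension is not
    overwritten; instead a new row is inserted with the version bumped, so
    the full history is kept in the table."""
    history = [r for r in dim if r[key] == row[key]]
    if not history:
        dim.append({**row, "version": 1})        # brand-new dimension
    else:
        current = max(history, key=lambda r: r["version"])
        changed = any(current.get(col) != val for col, val in row.items())
        if changed:
            # Insert a new versioned row; the old row stays as history.
            dim.append({**row, "version": current["version"] + 1})
```

A Type 1 variant would instead overwrite `current` in place, and a Type 3 variant would copy the old value into a "previous" column of the same row.
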
======================================= 
SCD: 
Source to SQ - 1 mapping 
SQ to LKP - 2 mapping 
SQ_LKP to EXP - 3 Mapping 
EXP to FTR - 4 Mapping 
FTR to UPD - 5 Mapping 
UPD to TGT - 6 Mapping 
SQGen to TGT - 7 Mapping. 
I think these are the 7 mapping steps used for SCD in general. 
For Type 1: the mapping is doubled, that is, one path for insert and the other for update, 14 in total. 
For Type 2: the mapping is increased threefold: one path for insert, a 2nd for update, and a 3rd to keep the old 
row (this is where the history is stored). 
For Type 3: it is doubled, to insert one row and also to populate one extra column that keeps the previous 
data. 
Cheers 
Prasath 
======================================= 
52.Informatica - What r the different types of Type2 dimension 
maping? 
QUESTION #52 Type 2 Dimension/Version Data mapping: in this mapping the 
updated dimension in the source gets inserted into the target along with a new 
version number, and a newly added dimension 
in the source is inserted into the target with a primary key. 
Type 2 Dimension/Flag Current mapping: this mapping is also used for slowly 
changing dimensions. In addition, it creates a flag value for a changed or new 
dimension. 
The flag indicates whether the dimension is new or newly updated. Recent dimensions 
are saved with the current flag value 1, and updated dimensions are saved with the 
value 0. 
Type 2 Dimension/Effective Date Range mapping: this is also one flavour of Type 2 
mapping used for slowly changing dimensions. This mapping also inserts both new 
and changed dimensions into the target, and changes are tracked by the effective 
date range for each version of each dimension. 
January 04, 2006 05:31:39 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: What r the different types of Type2 dimension mapi... 
======================================= 
Type2 
1. Version number 
2. Flag 
3.Date 
Cheers 
Sithu 
======================================= 
53.Informatica - How can you recognise whether or not the newly
added rows in the source get inserted into the target?
QUESTION #53 In a Type2 mapping we have three options to recognise the
newly added rows:
Version number
Flag value
Effective date range
December 14, 2005 10:43:31 #1 
rishi 
RE: How can you recognise whether or not the newly added rows get inserted?
======================================= 
If it is a Type 2 dimension, the above answer is fine, but if you want the details of all the insert
statements and updates, you need to use the session log file, configuring its tracing level to verbose.
You will then get a complete record of which rows were inserted and which were not.
======================================= 
Just use the debugger to watch how the data moves from source to target; it will show how many new
rows get inserted and how many are updated.
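As a rough illustration of the verbose-log approach, a toy parser can tally which rows were inserted and which were updated. The INSERT/UPDATE markers below are stand-ins; real session log lines are formatted differently, so this only shows the counting idea.

```python
def count_row_ops(log_lines):
    """Count insert vs. update entries in a (simplified) verbose session log."""
    counts = {"insert": 0, "update": 0}
    for line in log_lines:
        if "INSERT" in line:
            counts["insert"] += 1
        elif "UPDATE" in line:
            counts["update"] += 1
    return counts
```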
======================================= 
54.Informatica - What are the two types of processes that Informatica
uses to run a session?
QUESTION #54 Load Manager process: starts the session, creates the DTM
process, and sends post-session email when the session completes.
DTM process: creates threads to initialize the session, read, write, and
transform data, and handle pre- and post-session operations.
September 17, 2007 08:17:02 #1 
rasmi Member Since: June 2007 Contribution: 20 
RE: What are the two types of processes that Informatica uses to run a session?
======================================= 
When the workflow starts to run, the Informatica server process starts.
There are two processes: the Load Manager process and the DTM process.
The Load Manager process performs the following tasks:
1. Locks the workflow and reads the workflow properties.
2. Creates the workflow log file.
3. Starts all tasks in the workflow except sessions and worklets.
4. Starts the DTM process.
5. Sends the post-session email when the DTM terminates abnormally.
The DTM process performs the following tasks:
1. Reads the session properties.
2. Creates the session log file.
3. Creates threads, such as the master thread and the reader, writer, and transformation threads.
4. Sends the post-session email.
5. Runs the pre- and post-session shell commands.
6. Runs the pre- and post-session stored procedures.
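As a loose analogy (not the actual Informatica internals), the division of labour between the two processes can be sketched like this: a "load manager" locks the workflow and writes a log, then hands off to a "DTM" worker that spawns reader, transformer, and writer threads. All names here are illustrative.

```python
import threading

def dtm_process(session, log):
    """Toy DTM: read session properties, run worker threads, send email."""
    log.append(f"DTM: read properties of {session}")
    threads = [
        threading.Thread(target=log.append, args=(f"DTM thread: {role}",))
        for role in ("reader", "transformer", "writer")
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    log.append("DTM: post-session email")

def load_manager(workflow, log):
    """Toy Load Manager: lock the workflow, create the log, start the DTM."""
    log.append(f"LM: lock workflow {workflow} and read properties")
    log.append("LM: create workflow log")
    dtm_process(f"{workflow}.session", log)  # the LM starts the DTM process
```

The key point the sketch mirrors is that the Load Manager does workflow-level housekeeping while the per-session data movement happens inside the DTM's threads.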
======================================= 
55.Informatica - Can you generate reports in Informatica?
QUESTION #55 Yes. By using the Metadata Reporter we can generate reports in
Informatica.
January 19, 2006 05:05:46 #1 
sithusithu Member Since: December 2005 Contribution: 161 
RE: Can you generate reports in Informatica?
======================================= 
It is an ETL tool; you cannot make business reports from it, but you can generate a metadata report,
which is not meant for business analysis.
Cheers 
Sithu 
======================================= 
Can you please tell me how to generate metadata reports?
======================================= 
56.Informatica - Define mapping and sessions?
QUESTION #56 Mapping: a set of source and target definitions linked by
transformation objects that define the rules for data transformation.
Session: a set of instructions that describe how and when to move data
from sources to targets.
December 04, 2006 15:07:09 #1 
Pavani 
RE: Define maping and sessions? 
Más contenido relacionado

La actualidad más candente

Accenture informatica interview question answers
Accenture informatica interview question answersAccenture informatica interview question answers
Accenture informatica interview question answers
Sweta Singh
 
Informatica data warehousing_job_interview_preparation_guide
Informatica data warehousing_job_interview_preparation_guideInformatica data warehousing_job_interview_preparation_guide
Informatica data warehousing_job_interview_preparation_guide
Dhanasekar T
 
Informatica question & answer set
Informatica question & answer setInformatica question & answer set
Informatica question & answer set
Prem Nath
 
Informatica object migration
Informatica object migrationInformatica object migration
Informatica object migration
Amit Sharma
 
Informatica complex transformation i
Informatica complex transformation iInformatica complex transformation i
Informatica complex transformation i
Amit Sharma
 
Datastage parallell jobs vs datastage server jobs
Datastage parallell jobs vs datastage server jobsDatastage parallell jobs vs datastage server jobs
Datastage parallell jobs vs datastage server jobs
shanker_uma
 
5(re dfd-erd-data dictionay)
5(re dfd-erd-data dictionay)5(re dfd-erd-data dictionay)
5(re dfd-erd-data dictionay)
randhirlpu
 

La actualidad más candente (20)

Accenture informatica interview question answers
Accenture informatica interview question answersAccenture informatica interview question answers
Accenture informatica interview question answers
 
Informatica data warehousing_job_interview_preparation_guide
Informatica data warehousing_job_interview_preparation_guideInformatica data warehousing_job_interview_preparation_guide
Informatica data warehousing_job_interview_preparation_guide
 
Informatica question & answer set
Informatica question & answer setInformatica question & answer set
Informatica question & answer set
 
Informatica object migration
Informatica object migrationInformatica object migration
Informatica object migration
 
Data analytics with R
Data analytics with RData analytics with R
Data analytics with R
 
Online Datastage training
Online Datastage trainingOnline Datastage training
Online Datastage training
 
Informatica complex transformation i
Informatica complex transformation iInformatica complex transformation i
Informatica complex transformation i
 
What is ETL testing & how to enforce it in Data Wharehouse
What is ETL testing & how to enforce it in Data WharehouseWhat is ETL testing & how to enforce it in Data Wharehouse
What is ETL testing & how to enforce it in Data Wharehouse
 
Datastage parallell jobs vs datastage server jobs
Datastage parallell jobs vs datastage server jobsDatastage parallell jobs vs datastage server jobs
Datastage parallell jobs vs datastage server jobs
 
Etl testing
Etl testingEtl testing
Etl testing
 
Etl Overview (Extract, Transform, And Load)
Etl Overview (Extract, Transform, And Load)Etl Overview (Extract, Transform, And Load)
Etl Overview (Extract, Transform, And Load)
 
5(re dfd-erd-data dictionay)
5(re dfd-erd-data dictionay)5(re dfd-erd-data dictionay)
5(re dfd-erd-data dictionay)
 
Teradata Aggregate Join Indices And Dimensional Models
Teradata Aggregate Join Indices And Dimensional ModelsTeradata Aggregate Join Indices And Dimensional Models
Teradata Aggregate Join Indices And Dimensional Models
 
Migration from 8.1 to 11.3
Migration from 8.1 to 11.3Migration from 8.1 to 11.3
Migration from 8.1 to 11.3
 
To Study E T L ( Extract, Transform, Load) Tools Specially S Q L Server I...
To Study  E T L ( Extract, Transform, Load) Tools Specially  S Q L  Server  I...To Study  E T L ( Extract, Transform, Load) Tools Specially  S Q L  Server  I...
To Study E T L ( Extract, Transform, Load) Tools Specially S Q L Server I...
 
Dwh faqs
Dwh faqsDwh faqs
Dwh faqs
 
Etl process in data warehouse
Etl process in data warehouseEtl process in data warehouse
Etl process in data warehouse
 
L08 Data Source Layer
L08 Data Source LayerL08 Data Source Layer
L08 Data Source Layer
 
Datastage to ODI
Datastage to ODIDatastage to ODI
Datastage to ODI
 
Data extraction, transformation, and loading
Data extraction, transformation, and loadingData extraction, transformation, and loading
Data extraction, transformation, and loading
 

Similar a 51881801 informatica-faq

Content Mirror
Content MirrorContent Mirror
Content Mirror
fravy
 
22827361 ab initio-fa-qs
22827361 ab initio-fa-qs22827361 ab initio-fa-qs
22827361 ab initio-fa-qs
Capgemini
 
Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De...
Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De...Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De...
Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De...
Databricks
 
Ssis Best Practices Israel Bi U Ser Group Itay Braun
Ssis Best Practices   Israel Bi U Ser Group   Itay BraunSsis Best Practices   Israel Bi U Ser Group   Itay Braun
Ssis Best Practices Israel Bi U Ser Group Itay Braun
sqlserver.co.il
 
Advance Sql Server Store procedure Presentation
Advance Sql Server Store procedure PresentationAdvance Sql Server Store procedure Presentation
Advance Sql Server Store procedure Presentation
Amin Uddin
 
MongoDB Replication and Sharding
MongoDB Replication and ShardingMongoDB Replication and Sharding
MongoDB Replication and Sharding
Tharun Srinivasa
 
MySQL Scaling Presentation
MySQL Scaling PresentationMySQL Scaling Presentation
MySQL Scaling Presentation
Tommy Falgout
 

Similar a 51881801 informatica-faq (20)

123448572 all-in-one-informatica
123448572 all-in-one-informatica123448572 all-in-one-informatica
123448572 all-in-one-informatica
 
Content Mirror
Content MirrorContent Mirror
Content Mirror
 
22827361 ab initio-fa-qs
22827361 ab initio-fa-qs22827361 ab initio-fa-qs
22827361 ab initio-fa-qs
 
Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De...
Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De...Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De...
Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De...
 
[2D1]Elasticsearch 성능 최적화
[2D1]Elasticsearch 성능 최적화[2D1]Elasticsearch 성능 최적화
[2D1]Elasticsearch 성능 최적화
 
[2 d1] elasticsearch 성능 최적화
[2 d1] elasticsearch 성능 최적화[2 d1] elasticsearch 성능 최적화
[2 d1] elasticsearch 성능 최적화
 
Ssis Best Practices Israel Bi U Ser Group Itay Braun
Ssis Best Practices   Israel Bi U Ser Group   Itay BraunSsis Best Practices   Israel Bi U Ser Group   Itay Braun
Ssis Best Practices Israel Bi U Ser Group Itay Braun
 
data stage-material
data stage-materialdata stage-material
data stage-material
 
Advance Sql Server Store procedure Presentation
Advance Sql Server Store procedure PresentationAdvance Sql Server Store procedure Presentation
Advance Sql Server Store procedure Presentation
 
Apache Spark 3.0: Overview of What’s New and Why Care
Apache Spark 3.0: Overview of What’s New and Why CareApache Spark 3.0: Overview of What’s New and Why Care
Apache Spark 3.0: Overview of What’s New and Why Care
 
MongoDB Replication and Sharding
MongoDB Replication and ShardingMongoDB Replication and Sharding
MongoDB Replication and Sharding
 
Deployment with ExpressionEngine
Deployment with ExpressionEngineDeployment with ExpressionEngine
Deployment with ExpressionEngine
 
Oracle tutorial
Oracle tutorialOracle tutorial
Oracle tutorial
 
DBM 380 Invent Yourself/newtonhelp.com
DBM 380 Invent Yourself/newtonhelp.comDBM 380 Invent Yourself/newtonhelp.com
DBM 380 Invent Yourself/newtonhelp.com
 
Spring data jpa are used to develop spring applications
Spring data jpa are used to develop spring applicationsSpring data jpa are used to develop spring applications
Spring data jpa are used to develop spring applications
 
Whitepaper : Working with Greenplum Database using Toad for Data Analysts
Whitepaper : Working with Greenplum Database using Toad for Data Analysts Whitepaper : Working with Greenplum Database using Toad for Data Analysts
Whitepaper : Working with Greenplum Database using Toad for Data Analysts
 
MySQL Scaling Presentation
MySQL Scaling PresentationMySQL Scaling Presentation
MySQL Scaling Presentation
 
De-duplicated Refined Zone in Healthcare Data Lake Using Big Data Processing ...
De-duplicated Refined Zone in Healthcare Data Lake Using Big Data Processing ...De-duplicated Refined Zone in Healthcare Data Lake Using Big Data Processing ...
De-duplicated Refined Zone in Healthcare Data Lake Using Big Data Processing ...
 
Testing Delphix: easy data virtualization
Testing Delphix: easy data virtualizationTesting Delphix: easy data virtualization
Testing Delphix: easy data virtualization
 
Ebook9
Ebook9Ebook9
Ebook9
 

Último

Call Girls In Hsr Layout ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Hsr Layout ☎ 7737669865 🥵 Book Your One night StandCall Girls In Hsr Layout ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Hsr Layout ☎ 7737669865 🥵 Book Your One night Stand
amitlee9823
 
➥🔝 7737669865 🔝▻ Mathura Call-girls in Women Seeking Men 🔝Mathura🔝 Escorts...
➥🔝 7737669865 🔝▻ Mathura Call-girls in Women Seeking Men  🔝Mathura🔝   Escorts...➥🔝 7737669865 🔝▻ Mathura Call-girls in Women Seeking Men  🔝Mathura🔝   Escorts...
➥🔝 7737669865 🔝▻ Mathura Call-girls in Women Seeking Men 🔝Mathura🔝 Escorts...
amitlee9823
 
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
amitlee9823
 
CHEAP Call Girls in Rabindra Nagar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Rabindra Nagar  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Rabindra Nagar  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Rabindra Nagar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Call Girls Indiranagar Just Call 👗 9155563397 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 9155563397 👗 Top Class Call Girl Service B...Call Girls Indiranagar Just Call 👗 9155563397 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 9155563397 👗 Top Class Call Girl Service B...
only4webmaster01
 
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
ZurliaSoop
 
Call Girls In Nandini Layout ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Nandini Layout ☎ 7737669865 🥵 Book Your One night StandCall Girls In Nandini Layout ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Nandini Layout ☎ 7737669865 🥵 Book Your One night Stand
amitlee9823
 
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
amitlee9823
 
Call Girls Bommasandra Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Bommasandra Just Call 👗 7737669865 👗 Top Class Call Girl Service B...Call Girls Bommasandra Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Bommasandra Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
amitlee9823
 
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
amitlee9823
 

Último (20)

Call Girls In Hsr Layout ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Hsr Layout ☎ 7737669865 🥵 Book Your One night StandCall Girls In Hsr Layout ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Hsr Layout ☎ 7737669865 🥵 Book Your One night Stand
 
April 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's AnalysisApril 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's Analysis
 
➥🔝 7737669865 🔝▻ Mathura Call-girls in Women Seeking Men 🔝Mathura🔝 Escorts...
➥🔝 7737669865 🔝▻ Mathura Call-girls in Women Seeking Men  🔝Mathura🔝   Escorts...➥🔝 7737669865 🔝▻ Mathura Call-girls in Women Seeking Men  🔝Mathura🔝   Escorts...
➥🔝 7737669865 🔝▻ Mathura Call-girls in Women Seeking Men 🔝Mathura🔝 Escorts...
 
Detecting Credit Card Fraud: A Machine Learning Approach
Detecting Credit Card Fraud: A Machine Learning ApproachDetecting Credit Card Fraud: A Machine Learning Approach
Detecting Credit Card Fraud: A Machine Learning Approach
 
BDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort Service
BDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort ServiceBDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort Service
BDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort Service
 
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
 
CHEAP Call Girls in Rabindra Nagar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Rabindra Nagar  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Rabindra Nagar  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Rabindra Nagar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Call Girls Indiranagar Just Call 👗 9155563397 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 9155563397 👗 Top Class Call Girl Service B...Call Girls Indiranagar Just Call 👗 9155563397 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 9155563397 👗 Top Class Call Girl Service B...
 
Anomaly detection and data imputation within time series
Anomaly detection and data imputation within time seriesAnomaly detection and data imputation within time series
Anomaly detection and data imputation within time series
 
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
 
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% SecureCall me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
 
Call Girls In Nandini Layout ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Nandini Layout ☎ 7737669865 🥵 Book Your One night StandCall Girls In Nandini Layout ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Nandini Layout ☎ 7737669865 🥵 Book Your One night Stand
 
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
 
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
 
DATA SUMMIT 24 Building Real-Time Pipelines With FLaNK
DATA SUMMIT 24  Building Real-Time Pipelines With FLaNKDATA SUMMIT 24  Building Real-Time Pipelines With FLaNK
DATA SUMMIT 24 Building Real-Time Pipelines With FLaNK
 
Call Girls Bommasandra Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Bommasandra Just Call 👗 7737669865 👗 Top Class Call Girl Service B...Call Girls Bommasandra Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Bommasandra Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
 
Midocean dropshipping via API with DroFx
Midocean dropshipping via API with DroFxMidocean dropshipping via API with DroFx
Midocean dropshipping via API with DroFx
 
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
 
Cheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 night
Cheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 nightCheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 night
Cheap Rate Call girls Sarita Vihar Delhi 9205541914 shot 1500 night
 

51881801 informatica-faq

  • 1. file:///C|/Perl/bin/result.html INFORMATICA INTERVIEW QUESTIONS - Extracted from GeekInterview.com by Deepak Babu http://prdeepakbabu.wordpress.com DISCALIMER: The questions / data available here are from geekinterview.com. It has been compiled to single document for the ease of browsing through the informatica relevant questions. For any details, please refer www.geekinterview.com. We are not responsible for any data inaccuracy. 1.Informatica - Why we use lookup transformations? QUESTION #1 Lookup Transformations can access data from relational tables that are not sources in mapping. With Lookup transformation, we can accomplish the following tasks: Get a related value-Get the Employee Name from Employee table based on the Employee IDPerform Calculation. Update slowly changing dimension tables - We can use unconnected lookup transformation to determine whether the records already exist in the target or not. Click Here to view complete document No best answer available. Please pick the good answer available or submit your answer. January 19, 2006 01:12:33 #1 sithusithu Member Since: December 2005 Contribution: 161 RE: Why we use lookup transformations? ======================================= Nice Question If we don't have a look our datawarehouse will be have more unwanted duplicates Use a Lookup transformation in your mapping to look up data in a relational table view or synonym. Import a lookup definition from any relational database to which both the Informatica Client and Server can connect. You can use multiple Lookup transformations in a mapping Cheers Sithu file:///C|/Perl/bin/result.html (1 of 363)4/1/2009 7:50:58 PM
  • 2. file:///C|/Perl/bin/result.html ======================================= Lookup Transformations used to search data from relational tables/FLAT Files that are not used in mapping. Types of Lookup: 1. Connected Lookup 2. UnConnected Lookup ======================================= The main use of lookup is to get a related value either from a relational sources or flat files ======================================= The following reasons for using lookups..... 1)We use Lookup transformations that query the largest amounts of data to improve overall performance. By doing that we can reduce the number of lookups on the same table. 2)If a mapping contains Lookup transformations we will enable lookup caching if this option is not enabled . We will use a persistent cache to improve performance of the lookup whenever possible. We will explore the possibility of using concurrent caches to improve session performance. We will use the Lookup SQL Override option to add a WHERE clause to the default SQL statement if it is not defined We will add ORDER BY clause in lookup SQL statement if there is no order by defined. We will use SQL override to suppress the default ORDER BY statement and enter an override ORDER BY with fewer columns. Indexing the Lookup Table We can improve performance for the following types of lookups: For cached lookups we will index the lookup table using the columns in the lookup ORDER BY statement. For Un-cached lookups we will Index the lookup table using the columns in the lookup where condition. file:///C|/Perl/bin/result.html (2 of 363)4/1/2009 7:50:58 PM
3) In some cases we use a Lookup instead of a Joiner, as a lookup is faster than a joiner in some cases, when the lookup contains only the master data.
4) Lookups also help in performance tuning of the mappings.
=======================================
A Lookup transformation is like a set of references for the target table. For example, suppose you are travelling by auto rickshaw. In the morning you notice the driver showing you a card saying that from today onwards there is a hike in petrol, so you have to pay more. The card he shows is a set of references for his customers. The Lookup transformation works the same way.
These are of 2 types:
a) Connected Lookup
b) Unconnected Lookup
A connected Lookup is connected in a single pipeline from a source to a target, whereas an unconnected Lookup is isolated within the mapping and is called with the help of an Expression transformation.
=======================================
Lookup transformations are used to get a related value, update slowly changing dimensions, and calculate expressions.

2. Informatica - While importing the relational source definition from the database, what is the metadata of the source you import?

QUESTION #2: Source name, database location, column names, datatypes, key constraints.
=======================================
Source name, datatypes, key constraints, database location.
=======================================
Relational sources are tables, views, and synonyms: source name, database location, column name, datatype, key constraints. For synonyms you will have to manually create the constraints.

3. Informatica - In how many ways can you update a relational source definition, and what are they?

QUESTION #3: Two ways: 1. Edit the definition. 2. Reimport the definition.

=======================================
We can do it in 2 ways: 1) by reimporting the source definition; 2) by editing the source definition.

4. Informatica - Where should you place the flat file to import the
flat file definition into the Designer?

QUESTION #4: Place it in a local folder.

=======================================
There is no such restriction on where to place the source file. From a performance point of view it is better to place the file in the server's local src folder; if you need the path, please check the server properties available in the Workflow Manager. This doesn't mean we cannot place it in any other folder, but if we place it in the server src folder, it will be selected by default at session-creation time.
=======================================
The file must be in a directory local to the client machine.
=======================================
Basically the flat file should be stored in the src folder inside the Informatica server folder. Logically it should pick up the file from any location, but otherwise it gives an error of invalid identifier or is not able to read the first row. So it is better to keep the file in the src folder, which is created when Informatica is installed.
=======================================
We can place the source file anywhere on the network, but it will take more time to fetch data from the source file; if the source file is present in the server's srcfile folder, it will fetch data from the source up to 25 times faster.

5. Informatica - To provide support for mainframe source data, which files are used as source definitions?

QUESTION #5: COBOL files.
=======================================
COBOL copybook files.
=======================================
The mainframe files are used as VSAM files in Informatica by using the Normalizer transformation.

6. Informatica - Which transformation do you need while using COBOL sources as source definitions?

QUESTION #6: The Normalizer transformation, which is used to normalize the data, since COBOL sources often consist of denormalized data.

=======================================
Normalizer transformation.
=======================================
The Normalizer transformation, which is used to normalize the data.
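What the Normalizer does — turning a COBOL-style record with a repeating group into one row per occurrence — can be sketched as follows. The record layout and field names here are illustrative, not from the original.

```python
# Sketch of what a Normalizer transformation does: a denormalized
# record carries a repeating group (e.g. four quarterly sales values),
# and normalization emits one output row per occurrence.
def normalize(record, repeat_field):
    base = {k: v for k, v in record.items() if k != repeat_field}
    for index, value in enumerate(record[repeat_field], start=1):
        row = dict(base)
        row["occurrence"] = index   # analogous to the generated occurrence index
        row["value"] = value
        yield row

denormalized = {"store": "S1", "sales": [100, 200, 150, 300]}
rows = list(normalize(denormalized, "sales"))
# rows -> 4 rows, one per quarterly sales value, each carrying "store"
```

Each output row repeats the non-recurring fields and adds the occurrence index, which is roughly the shape Informatica produces from a COBOL OCCURS clause.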
7. Informatica - How can you create or import a flat file definition into the Warehouse Designer?

QUESTION #7: You cannot create or import a flat file definition into the Warehouse Designer directly. Instead you must analyze the file in the Source Analyzer, then drag it into the Warehouse Designer. When you drag the flat file source definition into the Warehouse Designer workspace, the Warehouse Designer creates a relational target definition, not a file definition. If you want to load to a file, configure the session to write to a flat file. When the Informatica Server runs the session, it creates and loads the flat file.

=======================================
You can create a flat file definition in the Warehouse Designer. In the Warehouse Designer you can create a new target: select the type as flat file, save it, and you can enter various columns for that created target by editing its properties. Once the target is created, save it; you can import it from the Mapping Designer.
=======================================
Yes, you can import a flat file directly into the Warehouse Designer. This way it will import the field definitions directly.
=======================================
1) Manually create the flat file target definition in the Warehouse Designer.
2) Create a target definition from a source definition, by dropping a source definition in the Warehouse Designer.
3) Import a flat file definition using the flat file wizard
(the file must be local to the client machine).
=======================================
While creating flat files manually, we drag and drop the structure from the Source Qualifier if the structure we need is the same as the source; for this we need to check in the source and then drag and drop it into the flat file. If
not, all the columns in the source will be changed to primary keys.

8. Informatica - What is a mapplet?

QUESTION #8: A mapplet is a set of transformations that you build in the Mapplet Designer and can use in multiple mappings.

=======================================
For example: suppose we have several fact tables that require a series of dimension keys. We can create a mapplet which contains a series of Lookup transformations to find each dimension key, and use it in each fact table mapping instead of creating the same lookup logic in each mapping.
=======================================
A part (subset) of a mapping is known as a mapplet.
=======================================
A set of transformations whose logic can be reused.
=======================================
A mapplet should have a Mapplet Input transformation, which receives input values, and an Output transformation, which passes the final modified data back to the mapping. When the mapplet is displayed within the mapping, only the input and output ports are displayed, so the internal logic is hidden from the end user's point of view.
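The mapplet idea above — a reusable chain of transformation logic exposed only through input and output ports — can be sketched as a function that several mappings call instead of duplicating the lookup logic. The dimension table and names below are made-up examples.

```python
# Sketch of a mapplet: a reusable set of transformation logic whose
# internals are hidden; callers only see the input and output "ports".
# The dimension table below is illustrative.
product_dim = {"P1": 11, "P2": 22}

def dimension_key_mapplet(row):
    """Input port: a source row. Output port: the same row with its
    surrogate dimension key resolved (-1 for an unmatched key)."""
    out = dict(row)
    out["product_key"] = product_dim.get(row["product_id"], -1)
    return out

# Two different "mappings" reuse the same mapplet instead of
# re-implementing the dimension lookup in each one.
fact_sales = [dimension_key_mapplet(r) for r in [{"product_id": "P1", "amount": 5}]]
fact_returns = [dimension_key_mapplet(r) for r in [{"product_id": "P9", "amount": 1}]]
```

Both fact-table loads call the same piece of logic, which is exactly the duplication the dimension-key example in the answer above is avoiding.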
=======================================
A reusable mapping is known as a mapplet.
=======================================
A mapplet is a reusable piece of business logic which can be used in mappings.
=======================================
A mapplet is a reusable object which contains one or more transformations, used to populate data from source to target based on business logic; we can use the same logic in different mappings without creating the mapping again.
=======================================
The Mapplet Designer is used to create mapplets.
=======================================
A mapplet is a reusable object that represents a set of transformations. A mapplet can be designed using the Mapplet Designer in Informatica PowerCenter.
=======================================
Basically a mapplet is a subset of a mapping, in which we can have the information for each dimension key by keeping the different mappings created individually. If we want a series of dimension keys in the final fact table, we will use the Mapping Designer.

9. Informatica - What is a transformation?

QUESTION #9: It is a repository object that generates, modifies, or passes data.

=======================================
A transformation is a repository object that passes data to the next stage (i.e. to the next transformation or target), with or without modifying the data.
=======================================
It is the process of converting a given input to a desired output.
=======================================
A set of operations.
=======================================
A transformation is a repository object that converts a given input to a desired output. It can generate, modify, and pass data.
=======================================
A transformation is a repository object that generates, modifies, or passes data. The Designer provides a set of transformations that perform specific functions. For example, an Aggregator transformation performs calculations on groups of data.

10. Informatica - What are the Designer tools for creating transformations?

QUESTION #10: Mapping Designer, Transformation Developer, Mapplet Designer.

=======================================
There are 2 types of tools used for creating transformations: the Mapping Designer and the Mapplet Designer.
=======================================
Mapping Designer, Mapplet Designer, and Transformation Developer (for reusable transformations).

11. Informatica - What are the active and passive transformations?

QUESTION #11: An active transformation can change the number of rows that
pass through it. A passive transformation does not change the number of rows that pass through it.

=======================================
Transformations can be active or passive. An active transformation can change the number of rows that pass through it, such as a Filter transformation that removes rows that do not meet the filter condition. A passive transformation does not change the number of rows that pass through it, such as an Expression transformation that performs a calculation on data and passes all rows through the transformation.
=======================================
Active transformation: a transformation which changes the number of rows as data flows from source to target.
Passive transformation: a transformation which does not change the number of rows as data flows from source to target.

12. Informatica - What are connected and unconnected transformations?

QUESTION #12: An unconnected transformation is not connected to other transformations in the mapping. A connected transformation is connected to other transformations in the mapping.
=======================================
An unconnected transformation can't be connected to another transformation, but it can be called inside another transformation.
=======================================
Here is the deal: a connected transformation is a part of your data flow in the pipeline, while an unconnected transformation is not - much like calling a program by name versus by reference. Use unconnected transformations when you want to call the same transformation many times in a single mapping.
=======================================
In addition to the first answer: unconnected transformations can be called from as many other transformations as needed. If you are using a transformation several times, use an unconnected one; you get better performance.
=======================================
Connected transformation: a transformation which participates in the mapping data flow. A connected transformation can receive multiple inputs and provide multiple outputs.
Unconnected: an unconnected transformation does not participate in the mapping data flow. It can receive multiple inputs but provides a single output.
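The distinction above can be sketched as: a connected transformation is a pipeline stage every row flows through, while an unconnected one is a function invoked on demand from an expression and returning a single value. All data and names here are illustrative.

```python
# Connected: part of the pipeline; every row flows through it.
def connected_uppercase(rows):
    for row in rows:
        row = dict(row)
        row["name"] = row["name"].upper()
        yield row

# Unconnected: not wired into the pipeline; called from inside an
# expression when needed, and returns a single value (single output port).
def unconnected_tax_rate(state):
    return {"CA": 0.0725, "NY": 0.04}.get(state, 0.0)

rows = [{"name": "ann", "state": "CA", "price": 100.0}]
out = []
for row in connected_uppercase(rows):
    # an "expression transformation" calling the unconnected lookup
    row["tax"] = row["price"] * unconnected_tax_rate(row["state"])
    out.append(row)
```

Note how the unconnected function is reachable from any expression without occupying a place in the row flow, which is why it suits being called many times in one mapping.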
13. Informatica - In how many ways can you create ports?

QUESTION #13: Two ways: 1. Drag the port from another transformation. 2. Click the Add button on the Ports tab.

=======================================
Two ways: 1. Drag the port from another transformation. 2. Click the Add button on the Ports tab.
=======================================
We can also copy and paste the ports in the Ports tab.

14. Informatica - What are reusable transformations?

QUESTION #14: Reusable transformations can be used in multiple mappings. When you need to incorporate this transformation into a mapping, you add an instance of it to the mapping. Later, if you change the definition of the transformation, all instances of it inherit the changes. Since the instance of a reusable transformation is a pointer to that transformation, you can change the transformation in the Transformation Developer and its instances automatically reflect these changes. This feature can save you a great deal of work.
=======================================
A transformation that can be reused is known as a reusable transformation. You can design one using 2 methods:
1. using the Transformation Developer
2. creating a normal one and promoting it to reusable.
=======================================
Hi to all friends out there: the transformation that can be reused is called a reusable transformation, and as the name suggests it is meant to be reused. We can create one in two different ways:
1) by creating a normal transformation and making it reusable by checking the check box in the properties of the Edit Transformation dialog;
2) by using the Transformation Developer, where whatever transformation is developed is reusable and can be used in the Mapping Designer, where we can further change its properties as per our requirement.
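The "instances are pointers" behavior described in QUESTION #14 can be sketched by having every mapping resolve the transformation through a shared registry, so redefining it in one place is reflected everywhere. This is an illustrative analogy, not Informatica's internal mechanism.

```python
# Sketch: each mapping holds a reference (a name resolved through a
# shared registry) to the same reusable transformation, so redefining
# it in one place is inherited by every instance.
registry = {}

def register(name, func):
    registry[name] = func

def apply_transform(name, value):
    return registry[name](value)   # instances resolve through the "pointer"

register("trim", lambda s: s.strip())
mapping_a = apply_transform("trim", "  x ")      # uses the original definition

# Later the reusable definition is changed in one place...
register("trim", lambda s: s.strip().lower())
mapping_b = apply_transform("trim", "  X ")      # ...and callers inherit it
```

Because callers look the transformation up at use time rather than copying it, a change to the shared definition is picked up by every mapping that references it.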
=======================================
1. A reusable transformation can be used in multiple mappings.
2. The Designer stores each reusable transformation as metadata, separate from any mappings that use the transformation.
3. Every reusable transformation falls within a category of transformations available in the Designer.
4. One can only create an External Procedure transformation as a reusable transformation.

15. Informatica - What are the methods for creating reusable transformations?

QUESTION #15: Two methods:
1. Design it in the Transformation Developer.
2. Promote a standard transformation from the Mapping Designer. After you add a transformation to the mapping, you can promote it to the status of a reusable transformation.
Once you promote a standard transformation to reusable status, you can demote it to a standard transformation at any time.
If you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the Revert button.
=======================================
PLEASE THINK TWICE BEFORE YOU POST AN ANSWER.
Answer: two methods.
1. Design it in the Transformation Developer; by default it is a reusable transformation.
2. Promote a standard transformation from the Mapping Designer. After you add a transformation to the mapping, you can promote it to the status of a reusable transformation.
Once you promote a standard transformation to reusable status, you CANNOT demote it to a standard transformation.
If you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the Revert button.
=======================================
You can design using 2 methods: 1. using the Transformation Developer; 2. creating a normal one and promoting it to reusable.

16. Informatica - What are the unsupported repository objects for a mapplet?

QUESTION #16: COBOL source definitions; Joiner transformations; Normalizer transformations; non-reusable Sequence Generator transformations; pre- or post-session stored procedures; target definitions; PowerMart 3.5-style LOOKUP functions; XML source definitions; IBM MQ source definitions.
=======================================
- Source definitions: definitions of database objects (tables, views, synonyms) or files that provide source data.
- Target definitions: definitions of database objects or files that contain the target data.
- Multi-dimensional metadata: target definitions that are configured as cubes and dimensions.
- Mappings: a set of source and target definitions along with transformations containing business logic that you build into the transformation. These are the instructions that the Informatica Server uses to transform and move data.
- Reusable transformations: transformations that you can use in multiple mappings.
- Mapplets: a set of transformations that you can use in multiple mappings.
- Sessions and workflows: sessions and workflows store information about how and when the Informatica Server moves data. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming, and loading data. A session is a type of task that you can put in a workflow. Each session corresponds to a single mapping.
=======================================
The following answer is from the Informatica help documentation. You cannot include the following objects in a mapplet:
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Pre- and post-session stored procedures
- Other mapplets
=======================================
Normalizer transformations, XML Source Qualifier transformations, and COBOL sources cannot be used.
=======================================
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Pre- and post-session stored procedures
- Other mapplets
- PowerMart 3.5-style LOOKUP functions
- Non-reusable Sequence Generator transformations

17. Informatica - What are mapping parameters and mapping variables?

QUESTION #17: A mapping parameter represents a constant value that you can define before running a session. A mapping parameter retains the same value throughout the entire session. When you use a mapping parameter, you declare and use the parameter in a mapping or mapplet, then define the value of the parameter in a parameter file for the session.
Unlike a mapping parameter, a mapping variable represents a value that can change throughout the session. The Informatica Server saves the value of a mapping variable to the repository at the end of a session run and uses that value the next time you run the session.
=======================================
Please refer to the documentation for more understanding.
Mapping variables have two identities: a start value and a current value.
Start value = current value when the session starts the execution of the underlying mapping.
Start value <> current value while the session is in progress and the variable value changes on one or more occasions.
The current value at the end of the session is nothing but the start value for the subsequent run of the same session.
=======================================
You can use mapping parameters and variables in the SQL query, user-defined join, and source filter of a Source Qualifier transformation. You can also use the system variable $$$SessStartTime. The Informatica Server first generates an SQL query and scans the query to replace each mapping parameter or variable with its start value, then executes the query on the source database.
=======================================
A mapping parameter represents a constant value defined before the mapping runs. Mapping reusability can be achieved by using mapping parameters.
A mapping variable represents a value that can be changed during the mapping run. Mapping variables can be used in an incremental loading process.

18. Informatica - Can you use the mapping parameters or variables created in one mapping in another mapping?

QUESTION #18: No. We can use mapping parameters or variables in any transformation of the same mapping or mapplet in which you have created the mapping parameters or variables.

=======================================
No. You might want to use a workflow parameter/variable if you want it to be visible to other mappings/sessions.
=======================================
The following sentences are extracted from the Informatica help as is; do they support the above two answers? After you create a parameter, you can use it in the Expression Editor of any transformation in a mapping or mapplet. You can also use it in Source Qualifier transformations and reusable transformations.
=======================================
I differ on this; we can use a global variable in sessions as well as in mappings. This provision is provided in Informatica 7.1.x versions; I have used it. Please check this in the properties.
=======================================
Thanks, Shivaji, but the statement does not completely answer the question: a mapping parameter can be used in a reusable transformation, but does that mean you can use the mapping parameter wherever the instances of the reusable transformation are used?
=======================================
The scope of a mapping variable is the mapping in which it is defined. A variable Var1 defined in mapping Map1 can only be used in Map1; you cannot use it in another mapping, say Map2.

19. Informatica - Can you use the mapping parameters or variables created in one mapping in any other reusable transformation?

QUESTION #19: Yes, because a reusable transformation is not contained within any mapplet or mapping.

=======================================
But when one can't use the mapping parameters and variables of one mapping in another mapping, how can they be used in a reusable transformation, when reusable transformations themselves can be used among multiple mappings? So I think one can't use mapping parameters and variables in reusable transformations. Please correct me if I am wrong.
=======================================
You can use mapping parameters or variables in a reusable transformation. When you use the transformation in a mapping, during execution of the session it validates whether the mapping parameter used in the transformation is defined for this mapping; if not, the session fails.

20. Informatica - How can you improve session performance in an Aggregator transformation?

QUESTION #20: Use sorted input.
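Why sorted input helps: when rows arrive already grouped on the group-by keys, the aggregator can emit each group as soon as the key changes, instead of caching every group until the end of the data. A sketch using Python's `groupby` over illustrative data:

```python
from itertools import groupby

# With sorted input, aggregation is streaming: a group is complete as
# soon as the key changes, so only one group needs to be held in
# memory at a time (this is what shrinks the aggregate cache).
rows = [("A", 10), ("A", 5), ("B", 7)]          # already sorted on the key

def aggregate_sorted(rows):
    for key, group in groupby(rows, key=lambda r: r[0]):
        yield key, sum(amount for _, amount in group)

totals = dict(aggregate_sorted(rows))
```

Like `groupby`, a sorted-input Aggregator assumes the incoming order matches the group-by keys; feeding it unsorted data would split groups, which is why the "check the sorted-input option only when a Sorter precedes it" advice below matters.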
=======================================
Use sorted input:
1. Use a Sorter before the Aggregator.
2. Do not forget to check the option on the Aggregator that tells it the input is sorted on the same keys as the group by. The key order is also very important.
=======================================
You can use the following guidelines to optimize the performance of an Aggregator transformation.
Use sorted input to decrease the use of aggregate caches. Sorted input reduces the amount of data cached during the session and improves session performance. Use this option with the Sorter transformation to pass sorted data to the Aggregator transformation.
Limit connected input/output or output ports. Limit the number of connected input/output or output ports to reduce the amount of data the Aggregator transformation stores in the data cache.
Filter before aggregating.
If you use a Filter transformation in the mapping, place the transformation before the Aggregator transformation to reduce unnecessary aggregation.
=======================================
Following are the 3 ways in which we can improve session performance:
a) Use sorted input to decrease the use of aggregate caches.
b) Limit connected input/output or output ports.
c) Filter before aggregating (if you are using any filter condition).
=======================================
By using incremental aggregation also we can improve performance, because it passes only the new data to the mapping and uses historical data to perform the aggregation.
=======================================
To improve session performance in an Aggregator transformation, enable the session option Incremental Aggregation.
=======================================
- Use sorted input to decrease the use of aggregate caches.
- Limit connected input/output or output ports to reduce the amount of data the Aggregator transformation stores in the data cache.
- Filter the data before aggregating it.

21. Informatica - What is the aggregate cache in an Aggregator transformation?

QUESTION #21: The Aggregator stores data in the aggregate cache until it completes the aggregate calculations. When you run a session that uses an Aggregator transformation, the Informatica Server creates index and data caches in memory to process the transformation. If the Informatica Server requires more space, it stores overflow values in cache files.
=======================================
When you run a workflow that uses an Aggregator transformation, the Informatica Server creates index and data caches in memory to process the transformation. If the Informatica Server requires more space, it stores overflow values in cache files.
=======================================
The aggregate cache contains data values while aggregate calculations are being performed. The aggregate cache is made up of an index cache and a data cache: the index cache contains group values and the data cache consists of row values.
=======================================
When the server runs a session with an Aggregator transformation, it stores data in memory until it completes the aggregation. When you partition a source, the server creates one memory cache and one disk cache for each partition, and routes the data from one partition to another based on the group key values of the transformation.

22. Informatica - What are the differences between the Joiner transformation and the Source Qualifier transformation?

QUESTION #22: You can join heterogeneous data sources in a Joiner transformation, which we cannot achieve in a Source Qualifier transformation. You need matching keys to join two relational sources in a Source Qualifier transformation, whereas you don't need matching keys to join two sources in a Joiner. Two relational sources should come from the same data source in a Source Qualifier; with a Joiner you can join relational sources coming from different sources as well.
January 27, 2006 01:45:56 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What are the differences between joiner transformation and source qualifier transformation?
=======================================
Source Qualifier: homogeneous sources. Joiner: heterogeneous sources.
Cheers
Sithu
=======================================
Hi
The Source Qualifier transformation provides an alternate way to filter rows. Rather than filtering rows from within a mapping, the Source Qualifier transformation filters rows when read from a source. The main difference is that the Source Qualifier limits the row set extracted from a source, while the Filter transformation limits the row set sent to a target. Since a Source Qualifier reduces the number of rows used throughout the mapping, it provides better performance. However, the Source Qualifier transformation only lets you filter rows from relational sources, while the Filter transformation filters rows from any type of source. Also note that since it runs in the database, you must make sure that the filter condition in the Source Qualifier transformation only uses standard SQL.
Shivaji Thaneru
=======================================
Hi
As per my knowledge, you need matching keys to join two relational sources both in a Source Qualifier as well as in a Joiner transformation. But the difference is that in a Source Qualifier both keys must have a primary key - foreign key relation, whereas in a Joiner transformation that is not needed.
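The heterogeneous-source point above can be illustrated with a small Python sketch (not Informatica code; the file contents, field names and helper are hypothetical). One input is parsed from a flat file, the other is a list standing in for rows fetched from a relational table, and they are joined in the pipeline the way a Joiner does — something a Source Qualifier join, which runs inside one database, cannot do:

```python
import csv
import io

# Hypothetical flat-file source (CSV text) and relational-source rows.
FLAT_FILE = "emp_id,city\n1,Pune\n2,Delhi\n"
DB_ROWS = [{"emp_id": "1", "name": "Asha"},
           {"emp_id": "3", "name": "Ravi"}]

def joiner_style_join(flat_rows, db_rows, key):
    # Build an in-memory index on one input (analogous to the Joiner's
    # master cache), then probe it with rows from the other input.
    index = {r[key]: r for r in db_rows}
    return [{**f, **index[f[key]]} for f in flat_rows if f[key] in index]

flat_rows = list(csv.DictReader(io.StringIO(FLAT_FILE)))
joined = joiner_style_join(flat_rows, DB_ROWS, "emp_id")
```

The join happens in the ETL process itself, so neither source needs to live in the same database — which is exactly the capability the Joiner adds over the Source Qualifier.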
=======================================
A Source Qualifier is used for reading the data from the database, whereas a Joiner transformation is used for joining two data tables. A Source Qualifier can also be used to join two tables, but the condition is that both tables should be from a relational database and should have a primary key with the same data structure. Using a Joiner we can join data from two heterogeneous sources, like two flat files, or one relational source and one flat file.
=======================================
23.Informatica - In which conditions can we not use a joiner transformation (limitations of joiner transformation)?
QUESTION #23 You cannot use a Joiner transformation when:
- Both pipelines begin with the same original data source.
- Both input pipelines originate from the same Source Qualifier transformation.
- Both input pipelines originate from the same Normalizer transformation.
- Both input pipelines originate from the same Joiner transformation.
- Either input pipeline contains an Update Strategy transformation.
- Either input pipeline contains a connected or unconnected Sequence Generator transformation.
January 25, 2006 12:18:35 #1
Surendra
RE: In which conditions can we not use a joiner transformation?
=======================================
This is no longer valid in version 7.2. Now we can use a Joiner even if the data is coming from the same source.
SK
=======================================
You cannot use a Joiner transformation in the following situations (according to Informatica 7.1):
- Either input pipeline contains an Update Strategy transformation.
- You connect a Sequence Generator transformation directly before the Joiner transformation.
=======================================
I don't understand the second one, which says we have a Sequence Generator? Please can you explain that one?
=======================================
Can you please let me know the correct and clear answer for the limitations of the joiner transformation?
swapna
=======================================
You cannot use a Joiner transformation when you connect a Sequence Generator transformation directly before the Joiner transformation. For more information check the Informatica 7.1 manual.
=======================================
What about join conditions? Can we have a != condition in a joiner?
=======================================
No, in a Joiner transformation you can only use equal to (=) as a join condition. Any other comparison operator is not allowed: > < != <> etc. are not allowed as a join condition.
Utsav
=======================================
Yes, the joiner only supports an equality condition. The Joiner transformation does not match null values. For example, if both EMP_ID1 and EMP_ID2 from the example above contain a row with a null value, the PowerCenter Server does not consider them a match and does not join the two rows. To join rows with null values, you can replace null input with default values and then join on the default values.
=======================================
We cannot use a joiner transformation in the following two conditions:
1. When our data comes through an Update Strategy transformation; in other words, after an Update Strategy we cannot add a Joiner transformation.
2. We cannot connect a Sequence Generator transformation directly before the Joiner transformation.
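The equality-only and NULL-handling behaviour described above can be sketched in Python (an illustration, not the PowerCenter implementation; all names are hypothetical). `None` keys never match, and supplying a default value on both sides mimics the documented workaround of replacing null input before joining:

```python
def equi_join(master, detail, key, default=None):
    """Equality-only join that, like the Joiner, never matches NULL (None) keys.

    Passing `default` imitates the workaround from the answer above:
    replace null input with a default value, then join on the default.
    """
    def k(row):
        v = row[key]
        return default if v is None and default is not None else v

    index = {}
    for m in master:
        mk = k(m)
        if mk is not None:               # null master keys are never cached
            index.setdefault(mk, []).append(m)

    out = []
    for d in detail:
        dk = k(d)
        if dk is None:                   # null detail keys never match
            continue
        for m in index.get(dk, []):
            out.append({**m, **d})
    return out
```

Without a default, rows whose keys are `None` on both sides simply drop out of the result, mirroring how the Joiner treats nulls.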
=======================================
24.Informatica - What are the settings that you use to configure the joiner transformation?
QUESTION #24
- Master and detail source
- Type of join
- Condition of the join
Submitted by: sithusithu
- Master and detail source
- Type of join
- Condition of the join
The Joiner transformation supports the following join types, which you set in the Properties tab:
- Normal (Default)
- Master Outer
- Detail Outer
- Full Outer
Cheers,
Sithu
Above answer was rated as good by the following members: vivek1708
=======================================
There are a number of properties that you use to configure a Joiner transformation:
1) CASE SENSITIVE STRING COMPARISON: to join strings on a case-sensitive basis.
2) WORKING DIRECTORY: where to create the caches.
3) JOIN CONDITION: e.g. join on a.s = v.n
4) JOIN TYPE: normal, master outer, detail outer or full outer.
5) NULL ORDERING IN MASTER
6) NULL ORDERING IN DETAIL
7) TRACING LEVEL: level of detail about the operations.
8) INDEX CACHE: stores the group values of the input, if any.
9) DATA CACHE: stores the value of each row of data.
10) SORTED INPUT: a check box; check it if the input to the Joiner is sorted.
11) TRANSFORMATION SCOPE: the data taken into consideration (Transaction or All Input). Use Transaction if the result depends only on the rows being processed, and All Input if it depends on other data. For example, a Joiner using the same source in the pipeline keeps the data within the scope of one transaction; a Lookup depends on other data, or if a dynamic cache is enabled it has to process the other incoming data, so you will have to go for All Input.
=======================================
25.Informatica - What are the join types in joiner transformation?
QUESTION #25 Normal (Default), Master Outer, Detail Outer, Full Outer
September 12, 2005 12:38:39 #1
Praveen Vasudev
RE:
=======================================
Normal (Default) -- only matching rows from both master and detail.
Master outer -- all detail rows and only matching rows from master.
Detail outer -- all master rows and only matching rows from detail.
Full outer -- all rows from both master and detail (matching or non-matching).
=======================================
Follow these steps:
1. In the Mapping Designer choose Transformation-Create. Select the Joiner transformation. Enter a name and click OK. The naming convention for Joiner transformations is JNR_TransformationName. Enter a description for the transformation. This description appears in the Repository Manager, making it easier for you or others to understand or remember what the transformation does. The Designer creates the Joiner transformation. Keep in mind that you cannot use a Sequence Generator or Update Strategy transformation as a source to a Joiner transformation.
2. Drag all the desired input/output ports from the first source into the Joiner transformation. The Designer creates input/output ports for the source fields in the Joiner as detail fields by default. You can edit this property later.
3. Select and drag all the desired input/output ports from the second source into the Joiner transformation. The Designer configures the second set of source fields as master fields by default.
4. Double-click the title bar of the Joiner transformation to open the Edit Transformations dialog box.
5. Select the Ports tab.
6. Click any box in the M column to switch the master/detail relationship for the sources. Change the master/detail relationship if necessary by selecting the master source in the M column. Tip: Designating the source with fewer unique records as master increases performance during a join.
7. Add default values for specific ports as necessary. Certain ports are likely to contain NULL values, since the fields in one of the sources may be empty. You can specify a default value if the target database does not handle NULLs.
8. Select the Condition tab and set the condition. Click the Add button to add a condition. You can add multiple conditions. The master and detail ports must have matching datatypes. The Joiner transformation only supports equivalent (=) joins.
9. Select the Properties tab and enter any additional settings for the transformation.
10. Click OK.
11. Choose Repository-Save to save changes to the mapping.
Cheers
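The four join types listed above can be sketched as one plain-Python function (illustration only, not Informatica code; names are hypothetical, and keys are assumed unique on each side for brevity). Normal keeps only matches, master outer keeps all detail rows, detail outer keeps all master rows, and full outer keeps everything:

```python
def informatica_join(master, detail, key, join_type="normal"):
    """Sketch of the Joiner's four join types over lists of dict rows."""
    mindex = {m[key]: m for m in master}   # assumes unique keys per side
    dindex = {d[key]: d for d in detail}
    out = []
    for d in detail:
        m = mindex.get(d[key])
        if m is not None:
            out.append({**m, **d})                     # matching rows
        elif join_type in ("master_outer", "full_outer"):
            out.append(dict(d))                        # unmatched detail rows
    if join_type in ("detail_outer", "full_outer"):
        out.extend(dict(m) for m in master
                   if m[key] not in dindex)            # unmatched master rows
    return out
```

Note the naming is from the master/detail point of view, the reverse of SQL's left/right convention: "master outer" keeps the detail rows, "detail outer" keeps the master rows.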
Sithu
=======================================
26.Informatica - What are the joiner caches?
QUESTION #26 When a Joiner transformation occurs in a session, the Informatica Server reads all the records from the master source and builds index and data caches based on the master rows. After building the caches, the Joiner transformation reads records from the detail source and performs joins.
Submitted by: bneha15
For version 7.x and above: When the PowerCenter Server processes a Joiner transformation, it reads rows from both sources concurrently and builds the index and data cache based on the master rows. The PowerCenter Server then performs the join based on the detail source data and the cache data. To improve performance for an unsorted Joiner transformation, use the source with fewer rows as the master source. To improve performance for a sorted Joiner transformation, use the source with fewer duplicate key values as the master.
Above answer was rated as good by the following members: vivek1708
=======================================
From a performance perspective, always make the smaller of the two joining tables the master.
=======================================
The cache directory specifies the directory used to cache master records and the index to these records. By default the cache files are created in a directory specified by the server variable $PMCacheDir. If you override the directory, make sure the directory exists and contains enough disk space for the cache files. The directory can be a mapped or mounted drive.
Cheers
Sithu
=======================================
27.Informatica - What is the lookup transformation?
QUESTION #27 Use a Lookup transformation in your mapping to look up data in a relational table, view or synonym. The Informatica server queries the lookup table based on the lookup ports in the transformation. It compares the Lookup transformation port values to lookup table column values based on the lookup condition.
December 09, 2005 00:06:38 #1
phani
RE: What is the look up transformation?
=======================================
Using it we can access data from a relational table which is not a source in the mapping. For example, suppose the source contains only Empno but we want Empname as well in the mapping. Then instead of adding another table which contains Empname as a source, we can look up the table and get the Empname into the target.
=======================================
A lookup is a simple single-level reference structure with no parent/child relationships. Use a lookup when you have a set of reference members that you do not need to organize hierarchically.
=======================================
In DecisionStream a lookup is a simple single-level reference structure with no parent/child relationships. Use a lookup when you have a set of reference members that you do not need to organize hierarchically. HTH
=======================================
Use a Lookup transformation in your mapping to look up data in a relational table, view or synonym. Import a lookup definition from any relational database to which both the Informatica Client and Server can connect. You can use multiple Lookup transformations in a mapping.
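The Empno/Empname example above amounts to a keyed fetch against a reference table. A minimal Python sketch (illustration only; the table contents and function name are hypothetical):

```python
# Hypothetical lookup table: Empno -> Empname.
EMP_NAMES = {7369: "SMITH", 7499: "ALLEN"}

def lookup_empname(empno, default=None):
    # Query the lookup "table" on the condition Empno = input value;
    # return the related value, or a default when there is no match.
    return EMP_NAMES.get(empno, default)

# Enrich a target row with the looked-up value, as the mapping would.
target_row = {"empno": 7369, "empname": lookup_empname(7369)}
```

The dictionary here plays the role of the lookup cache: condition values (Empno) act as keys and output values (Empname) as the cached data.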
Cheers
Sithu
=======================================
A Lookup transformation in a mapping is used to look up data in a flat file or a relational table, view or synonym. You can import a lookup definition from any flat file or relational database to which both the PowerCenter Client and Server can connect. You can use multiple Lookup transformations in a mapping. I hope this is helpful for you.
Cheers
Sridhar
=======================================
28.Informatica - Why use the lookup transformation?
QUESTION #28 To perform the following tasks:
- Get a related value. For example, if your source table includes employee ID, but you want to include the employee name in your target table to make your summary data easier to read.
- Perform a calculation. Many normalized tables include values used in a calculation, such as gross sales per invoice or sales tax, but not the calculated value (such as net sales).
- Update slowly changing dimension tables. You can use a Lookup transformation to determine whether records already exist in the target.
August 21, 2006 22:26:47 #1
samba
RE: Why use the lookup transformation?
=======================================
A lookup is performed on a table, view, synonym or flat file. By using a lookup we can get a related value with a join condition and perform calculations. There are two types of lookups:
1) Connected
2) Unconnected
A connected lookup is within the pipeline, but an unconnected lookup is not connected to the pipeline; an unconnected lookup returns a single column value only. Let me know if you want any additional information.
Cheers
samba
=======================================
Hey, with regard to lookups: is there a dynamic lookup and a static lookup? If so, how do you set it? And is there a combination of dynamic connected lookups and static unconnected lookups?
=======================================
A lookup has two types, connected and unconnected. Usually we use a lookup to get a related value from a table. It has input ports, output ports, a lookup port and a return port, where the lookup port looks up the corresponding column for the value and the return port returns the value. We usually use it when there are no columns in common.
=======================================
For maintaining the slowly changing dimensions.
=======================================
Hi
The answer to your question is yes. There are 2 types of lookups: dynamic and normal (which you have termed static). To configure, just double-click on the Lookup transformation and go to the Properties tab. There will be an option "dynamic lookup cache"; select that. If you do not select this option then the lookup is merely a normal lookup. Please let me know if there are any questions. Thanks.
=======================================
29.Informatica - What are the types of lookup?
QUESTION #29 Connected and unconnected.
November 08, 2005 18:44:53 #1
swati
RE: What are the types of lookup?
=======================================
i) connected ii) unconnected iii) cached iv) uncached
=======================================
1. Connected lookup
2. Unconnected lookup
Lookup caches:
1. Persistent cache
2. Re-cache from database
3. Static cache
4. Dynamic cache
5. Shared cache
Cheers
Sithu
=======================================
Hello boss/madam, only two types of lookup are there:
1) Connected lookup
2) Unconnected lookup.
I don't understand why people are specifying the cache types; I want to know whether nowadays caches are also taken into this category of lookup. If yes, do specify in the answer list. Thank you.
=======================================
30.Informatica - Differences between connected and unconnected lookup?
QUESTION #30
Connected lookup: receives input values directly from the pipeline. Unconnected lookup: receives input values from the result of a :LKP expression in another transformation.
Connected lookup: you can use a dynamic or static cache. Unconnected lookup: you can use a static cache.
Connected lookup: cache includes all lookup columns used in the mapping. Unconnected lookup: cache includes all lookup/output ports in the lookup condition and the lookup/return port.
Connected lookup: supports user-defined default values. Unconnected lookup: does not support user-defined default values.
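As a rough Python analogy for the distinctions in the question above (hypothetical names and data, not Informatica code): an unconnected lookup behaves like a function call from an expression, returning a single value and `None` (NULL) on no match, while a connected lookup sits in the row pipeline, can attach several columns at once, and supports user-defined default values:

```python
# Hypothetical lookup source: employee number -> related columns.
LOOKUP_TABLE = {101: {"name": "Smith", "dept": "HR"}}

def unconnected_lkp(empno):
    # Called like a :LKP expression: one designated return port,
    # NULL (None) when there is no match.
    row = LOOKUP_TABLE.get(empno)
    return row["name"] if row else None

def connected_lkp(in_row):
    # In the pipeline: passes multiple lookup columns onward and applies
    # user-defined default values when there is no match.
    defaults = {"name": "N/A", "dept": "N/A"}
    row = LOOKUP_TABLE.get(in_row["empno"], defaults)
    return {**in_row, **row}
```

The function-call shape of `unconnected_lkp` is why an unconnected lookup can be invoked conditionally from any expression, at the cost of returning only one port.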
February 03, 2006 03:25:15 #1
Prasanna
RE: Differences between connected and unconnected lookup?
=======================================
In addition: a connected lookup can return/pass multiple rows/groups of data, whereas an unconnected lookup can return only one port.
=======================================
In addition to this: in a connected lookup, if the condition is not satisfied it returns 0. In an unconnected lookup, if the condition is not satisfied it returns NULL.
=======================================
Hi
Differences between connected and unconnected lookups:
- Connected: receives input values directly from the pipeline. Unconnected: receives input values from the result of a :LKP expression in another transformation.
- Connected: you can use a dynamic or static cache. Unconnected: you can use a static cache.
- Connected: the cache includes all lookup columns used in the mapping (that is, lookup source columns included in the lookup condition and lookup source columns linked as output ports to other transformations). Unconnected: the cache includes all lookup/output ports in the lookup condition and the lookup/return port.
- Connected: can return multiple columns from the same row, or insert into the dynamic lookup cache. Unconnected: designate one return port (R); returns one column from each row.
- Connected: if there is no match for the lookup condition, the PowerCenter Server returns the default value for all output ports; if you configure dynamic caching, the PowerCenter Server inserts rows into the cache or leaves it unchanged. Unconnected: if there is no match for the lookup condition, the PowerCenter Server returns NULL.
- Connected: if there is a match for the lookup condition, the PowerCenter Server returns the result of the lookup condition for all lookup/output ports; if you configure dynamic caching, the PowerCenter Server either updates the row in the cache or leaves the row unchanged. Unconnected: if there is a match for the lookup condition, the PowerCenter Server returns the result of the lookup condition into the return port.
- Connected: passes multiple output values to another transformation; link lookup/output ports to another transformation. Unconnected: passes one output value to another transformation; the lookup/output/return port passes the value to the transformation calling the :LKP expression.
- Connected: supports user-defined default values. Unconnected: does not support user-defined default values.
Shivaji Thaneru
=======================================
31.Informatica - What is meant by lookup caches?
QUESTION #31 The Informatica server builds a cache in memory when it processes the first row of data in a cached Lookup transformation. It allocates memory for the cache based on the amount you configure in the transformation or session properties. The Informatica server stores condition values in the index cache and output values in the data cache.
September 28, 2006 06:34:33 #1
srinivas vadlakonda
RE: What is meant by lookup caches?
=======================================
A lookup cache is the temporary memory that is created by the Informatica server to hold the lookup data and to perform the lookup conditions.
=======================================
A lookup cache is a temporary memory area which is created by the Informatica server and stores the lookup data based on certain conditions. The caches are of four types: 1) Persistent 2) Dynamic 3) Static and 4) Shared cache.
=======================================
32.Informatica - What are the types of lookup caches?
QUESTION #32
Persistent cache: you can save the lookup cache files and reuse them the next time the Informatica server processes a Lookup transformation configured to use the cache.
Recache from database: if the persistent cache is not synchronized with the lookup table, you can configure the Lookup transformation to rebuild the lookup cache.
Static cache: you can configure a static or read-only cache for any lookup table. By default the Informatica server creates a static cache. It caches the lookup table and lookup values in the cache for each row that comes into the transformation. When the lookup condition is true, the Informatica server does not update the cache while it processes the Lookup transformation.
Dynamic cache: if you want to cache the target table and insert new rows into the cache and the target, you can create a Lookup transformation to use a dynamic cache. The Informatica server dynamically inserts data into the target table.
Shared cache: you can share the lookup cache between multiple transformations. You can share an unnamed cache between transformations in the same mapping.
December 13, 2005 06:02:36 #1
Sithu
RE: What are the types of lookup caches?
=======================================
Caches:
1. Static cache
2. Dynamic cache
3. Persistent cache
Sithu
=======================================
Caches are of three types, namely dynamic cache, static cache and persistent cache.
Cheers
Sithu
=======================================
Dynamic cache, persistent cache, recache, shared cache.
=======================================
Hi, could anyone get me information on where you would use these caches for lookups and how you set them?
Thanks,
infoseeker
=======================================
There are 4 types of lookup cache - persistent, recache, static and dynamic.
Bye
Stephen
=======================================
Types of caches are: 1) Dynamic cache 2) Static cache 3) Persistent cache 4) Shared cache 5) Unshared cache
=======================================
There are five types of caches, such as static cache, dynamic cache, persistent cache, shared cache, etc.
=======================================
33.Informatica - Difference between static cache and dynamic cache
QUESTION #33
Static cache: you cannot insert or update the cache. The Informatica server returns a value from the lookup table or cache when the condition is true. When the condition is not true, the Informatica server returns the default value for connected transformations and NULL for unconnected transformations.
Dynamic cache: you can insert rows into the cache as you pass them to the target. The Informatica server inserts rows into the cache when the condition is false. This indicates that the row is not in the cache or target table. You can pass these rows to the target table.
Submitted by: vp
Let's say, for example, your lookup table is your target table. When you create the Lookup selecting the dynamic cache, what it does is look up values, and if there is no match it will insert the row into both the target and the lookup cache (hence the term dynamic cache - it builds up as you go along); if there is a match it will update the row in the target. On the other hand, static caches don't get updated when you do a lookup.
Above answer was rated as good by the following members: ssangi, ananthece
=======================================
34.Informatica - Which transformation should we use to normalize the COBOL and relational sources?
QUESTION #34 Normalizer transformation. When you drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation automatically appears, creating input and output ports for every column in the source.
January 19, 2006 01:08:06 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: Which transformation should we use to normalize the COBOL and relational sources?
=======================================
The Normalizer transformation normalizes records from COBOL and relational sources, allowing you to organize the data according to your own needs. A Normalizer transformation can appear anywhere in a data flow when you normalize a relational source. Use a Normalizer transformation instead of the Source Qualifier transformation when you normalize a COBOL source. When you drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation automatically appears, creating input and output ports for every column in the source.
Cheers
Sithu
=======================================
35.Informatica - How does the informatica server sort the string values in Rank transformation?
QUESTION #35 When the Informatica server runs in the ASCII data movement mode, it sorts session data using a binary sort order. If you configure the session to use a binary sort order, the Informatica server calculates the binary value of each string and returns the specified number of rows with the highest binary values for the string.
December 09, 2005 00:25:27 #1
phani
RE: How the informatica server sorts the string values...
=======================================
When the Informatica Server runs in UNICODE data movement mode, it uses the sort order configured in session properties.
=======================================
36.Informatica - What is the Rankindex in Rank transformation?
QUESTION #36 The Designer automatically creates a RANKINDEX port for each Rank transformation. The Informatica Server uses the Rank Index port to store the ranking position for each record in a group. For example, if you create a Rank transformation that ranks the top 5 salespersons for each quarter, the rank index numbers the salespeople from 1 to 5.
January 12, 2006 04:41:57 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What is the Rankindex in Ranktransformation?
=======================================
The port on which you want to generate the rank is known as the rank port; the generated values are known as the rank index.
Cheers
Sithu
=======================================
37.Informatica - What is the Router transformation?
QUESTION #37 A Router transformation is similar to a Filter transformation because both transformations allow you to use a condition to test data. However, a Filter transformation tests data for one condition and drops the rows of data that do not meet the condition. A Router transformation tests data for one or more conditions and gives you the option to route rows of data that do not meet any of the conditions to a default output group.
If you need to test the same input data based on multiple conditions, use a Router transformation in a mapping instead of creating multiple Filter transformations to perform the same task.
January 19, 2006 04:46:42 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What is the Router transformation?
=======================================
A Router transformation is similar to a Filter transformation because both transformations allow you to use a condition to test data. A Filter transformation tests data for one condition and drops the rows of data that do not meet the condition. However, a Router transformation tests data for one or more conditions and gives you the option to route rows of data that do not meet any of the conditions to a default output group.
Cheers
Sithu
=======================================
Note: I think the definition and purpose of the Router transformation given by sithusithu is not fully clear, as they have mentioned "A Router transformation tests data for one or more conditions". Sorry sithu and sithusithu, but I want to clarify that in a Filter transformation we can also give many conditions together, e.g. empno = 1234 and sal > 25000 (2 conditions).
The actual purposes of the Router transformation are:
1. Similar to the Filter transformation, to sort the source data according to the condition applied.
2. To load data into different target tables from the same source, but with a different condition for each target table as required.
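The multi-target routing described in point 2 above can be sketched in Python (illustration only, not Informatica code; group names and conditions are hypothetical). In one pass over the input, a row goes to every user-defined group whose condition it meets, and to the default group when none match:

```python
def route(rows, groups):
    """Router-style single pass: `groups` is a list of (name, condition) pairs.

    A row is copied to every group whose condition it satisfies; rows that
    satisfy no condition land in the default group.
    """
    out = {name: [] for name, _ in groups}
    out["default"] = []
    for row in rows:
        matched = False
        for name, cond in groups:
            if cond(row):
                out[name].append(row)
                matched = True
        if not matched:
            out["default"].append(row)
    return out

# Hypothetical emp rows routed to three targets by deptno, plus a default.
emps = [{"deptno": 10}, {"deptno": 20}, {"deptno": 30}, {"deptno": 99}]
routed = route(emps, [("T1", lambda r: r["deptno"] == 10),
                      ("T2", lambda r: r["deptno"] == 20),
                      ("T3", lambda r: r["deptno"] == 30)])
```

The single loop over `rows` is the performance point made above: three Filter transformations would each scan the input, while one Router reads it once.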
For example, from the EMP table we want to load data into three different target tables: T1 (where deptno = 10), T2 (where deptno = 20) and T3 (where deptno = 30). If we used Filter transformations we would need three of them; instead of three Filter transformations we can use only one Router transformation.
Advantages:
1. Better performance, because with the Router transformation the Informatica Server processes the input data only once, instead of three times as with the Filter transformations.
2. Less complexity, because we use only one Router transformation instead of multiple Filter transformations.
The Router transformation is active and connected.
=======================================
38.Informatica - What are the types of groups in a Router transformation?
QUESTION #38 Input group and output group. The Designer copies property information from the input ports of the input group to create a set of output ports for each output group. There are two types of output groups: user-defined groups and the default group. You cannot modify or delete default groups.

phani (December 09, 2005):
The input group contains the data coming from the source. We can create as many user-defined groups as required, one for each condition we want to specify. The default group contains all the rows of data that do not satisfy the condition of any group.
=======================================
A Router transformation has the following types of groups:
- Input
- Output
Input group: The Designer copies property information from the input ports of the input group to create a set of output ports for each output group.
Output groups: There are two types of output groups:
- User-defined groups
- Default group
You cannot modify or delete output ports or their properties.
Cheers, Sithu
=======================================
39.Informatica - Why do we use the Stored Procedure transformation?
QUESTION #39 For populating and maintaining databases.
sithusithu (January 19, 2006):
A Stored Procedure transformation is an important tool for populating and maintaining databases. Database administrators create stored procedures to automate time-consuming tasks that are too complicated for standard SQL statements.
Cheers, Sithu
=======================================
You might use stored procedures to do the following tasks:
- Check the status of a target database before loading data into it.
- Determine if enough space exists in a database.
- Perform a specialized calculation.
- Drop and recreate indexes.
Shivaji Thaneru
=======================================
We use a Stored Procedure transformation to execute a stored procedure, which in turn might do the
above things in a database and more.
=======================================
Can you give me a real-time scenario, please?
=======================================
40.Informatica - What is the Source Qualifier transformation?
QUESTION #40 When you add a relational or a flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier transformation represents the records that the Informatica Server reads when it runs a session.

Submitted by: Rama Rao B.
The Source Qualifier is also a table; it acts as an intermediary between the source and target metadata, and it also generates the SQL used when creating a mapping between source and target metadata.
Thanks, Rama Rao
=======================================
When you add a relational or a flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server reads when it executes a session.
- Join data originating from the same source database. You can join two or more tables with primary-foreign key relationships by linking the sources to one Source Qualifier.
- Filter records when the Informatica Server reads source data. If you include a filter condition, the Informatica Server adds a WHERE clause to the default query.
- Specify an outer join rather than the default inner join. If you include a user-defined join, the Informatica Server replaces the join information specified by the metadata in the SQL query.
- Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an ORDER BY clause to the default SQL query.
- Select only distinct values from the source. If you choose Select Distinct, the Informatica Server adds a SELECT DISTINCT statement to the default SQL query.
- Create a custom query to issue a special SELECT statement for the Informatica Server to read source data. For example, you might use a custom query to perform aggregate calculations or execute a stored procedure.
Cheers, Sithu
=======================================
Definition: the transformation that converts the source (relational or flat file) datatypes to Informatica datatypes, so it works as an intermediary between the source and the Informatica Server.
Tasks performed by the Source Qualifier transformation:
1. Join data originating from the same source database.
2. Filter records when the Informatica Server reads source data.
3. Specify an outer join rather than the default inner join.
4. Specify sorted ports.
5. Select only distinct values from the source.
6. Create a custom query to issue a special SELECT statement for the Informatica Server to read source data.
=======================================
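The way the filter, sorted ports and Select Distinct options shape the Source Qualifier's default query, as described in the answers above, can be sketched as follows. This mimics the documented behavior only; it is not Informatica's actual SQL generator, and the table and port names are illustrative:

```python
# Sketch of how Source Qualifier properties shape the default SELECT.
# Illustrative only; not Informatica's real query generator.
def build_default_query(table, ports, filter_condition=None,
                        sorted_ports=0, select_distinct=False):
    select = "SELECT DISTINCT" if select_distinct else "SELECT"
    sql = f"{select} {', '.join(ports)} FROM {table}"
    if filter_condition:            # filter condition -> WHERE clause
        sql += f" WHERE {filter_condition}"
    if sorted_ports:                # first N ports -> ORDER BY clause
        sql += " ORDER BY " + ", ".join(ports[:sorted_ports])
    return sql

q = build_default_query("EMP", ["EMPNO", "ENAME", "SAL"],
                        filter_condition="SAL > 25000", sorted_ports=1)
# -> SELECT EMPNO, ENAME, SAL FROM EMP WHERE SAL > 25000 ORDER BY EMPNO
```

A user-defined join or custom query would replace this generated statement entirely, which is why those options override the defaults above.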
The Source Qualifier transformation is the beginning of the pipeline: its main purpose is to read the data from the relational or flat file source and pass it into the mapping, so that the data can flow into the other transformations.
=======================================
A Source Qualifier is created with every source definition if the source is a relational database; the Source Qualifier fires a SELECT statement on the source database. With every source definition you get a Source Qualifier; without the Source Qualifier your mapping will be invalid and you cannot define the pipeline to the other instances. If the source is COBOL, that source definition gets a Normalizer transformation instead of a Source Qualifier.
=======================================
41.Informatica - What are the tasks that the Source Qualifier performs?
QUESTION #41 Join data originating from the same source database. Filter records when the Informatica Server reads source data. Specify an outer join rather than the default inner join. Specify sorted records. Select only distinct values from the source. Create a custom query to issue a special SELECT statement for the Informatica Server to read source data.

sithusithu (January 24, 2006):
- Join data originating from the same source database. You can join two or more tables with primary-foreign key relationships by linking the sources to one Source Qualifier.
- Filter records when the Informatica Server reads source data. If you include a filter condition, the Informatica Server adds a WHERE clause to the default query.
- Specify an outer join rather than the default inner join. If you include a user-defined join, the Informatica Server replaces the join information specified by the metadata in the SQL query.
- Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an ORDER BY clause to the default SQL query.
- Select only distinct values from the source. If you choose Select Distinct, the Informatica Server adds a SELECT DISTINCT statement to the default SQL query.
- Create a custom query to issue a special SELECT statement for the Informatica Server to read source data. For example, you might use a custom query to perform aggregate calculations or execute a stored procedure.
Cheers, Sithu
=======================================
42.Informatica - What is the target load order?
QUESTION #42 You specify the target load order based on the source qualifiers in a mapping. If you have multiple source qualifiers connected to multiple targets, you can designate the order in which the Informatica Server loads data into the targets.

saritha (March 01, 2006):
A target load order group is the collection of source qualifiers, transformations and targets linked together in a mapping.
=======================================
43.Informatica - What is the default join that the Source Qualifier provides?
QUESTION #43 An inner equijoin.

sithusithu (January 24, 2006):
The Joiner transformation supports the following join types, which you set in the Properties tab:
- Normal (default)
- Master Outer
- Detail Outer
- Full Outer
Cheers, Sithu
(Note: the join types listed above belong to the Joiner transformation, not the Source Qualifier.)
=======================================
An equijoin on a key common to the sources drawn by the Source Qualifier.
=======================================
44.Informatica - What are the basic needs to join two sources in a
source qualifier?
QUESTION #44 The two sources should have a primary key - foreign key relationship. The two sources should have matching data types.

rishi (December 14, 2005):
Both tables should have a common field with the same datatype. It is not necessary that they follow a primary key - foreign key relationship; if any such relationship exists, it only helps from a performance point of view.
=======================================
Also, if you are using a lookup in your mapping and the lookup table is small, try to do that lookup's join in the Source Qualifier to improve performance.
Regards, SK
=======================================
Both sources must be from the same database.
=======================================
45.Informatica - What is the Update Strategy transformation?
QUESTION #45 This transformation is used to maintain history data, or just the most recent changes, in the target table.
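The row-flagging behavior at the heart of the update strategy can be sketched in Python. The DD_* values mirror Informatica's documented constants (0 = insert, 1 = update, 2 = delete, 3 = reject); the decision logic and field names are purely illustrative:

```python
# Sketch of update-strategy row flagging. The DD_* values mirror
# Informatica's constants; the flagging rules here are illustrative only.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(source_row, target_keys):
    if source_row.get("amount", 0) < 0:
        return DD_REJECT              # bad data: reject the row
    if source_row["id"] in target_keys:
        return DD_UPDATE              # key already in the target: update
    return DD_INSERT                  # new key: insert

existing = {101, 102}                 # keys already present in the target
rows = [{"id": 101, "amount": 50},
        {"id": 103, "amount": 20},
        {"id": 102, "amount": -5}]
flags = [flag_row(r, existing) for r in rows]
# flags -> [DD_UPDATE, DD_INSERT, DD_REJECT]
```

In a real mapping this decision is usually expressed as an IIF expression inside the Update Strategy transformation, and it only takes effect when the session is set to Data Driven.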
sithusithu (January 19, 2006):
The model you choose constitutes your update strategy: how to handle changes to existing rows. In PowerCenter and PowerMart you set your update strategy at two different levels:
- Within a session. When you configure a session, you can instruct the Informatica Server either to treat all rows in the same way (for example, treat all rows as inserts) or to use instructions coded into the session mapping to flag rows for different database operations.
- Within a mapping. Within a mapping, you use the Update Strategy transformation to flag rows for insert, delete, update or reject.
Cheers, Sithu
=======================================
The Update Strategy transformation is used to flag records for insert, update, delete and reject. In Informatica PowerCenter you can develop the update strategy at two levels: use an Update Strategy transformation in the mapping design, or use the target table options in the session. The target table options are the following: Insert,
Update, Delete, Update as Insert, Update else Insert.
Thanks, Rekha
=======================================
46.Informatica - What is the default source option for the Update Strategy transformation?
QUESTION #46 Data driven.

Gyaneshwar (March 28, 2006):
DATA DRIVEN
=======================================
47.Informatica - What is Data Driven?
QUESTION #47 The Informatica Server follows the instructions coded into the Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete or reject. If you do not choose the Data Driven option setting, the Informatica Server ignores all Update Strategy transformations in the mapping.
sithusithu (January 19, 2006):
The Informatica Server follows instructions coded into Update Strategy transformations within the session mapping to determine how to flag rows for insert, delete, update or reject. If the mapping for the session contains an Update Strategy transformation, this field is marked Data Driven by default.
Cheers, Sithu
=======================================
When the Data Driven option is selected in the session properties, the server will honour the update strategy (DD_UPDATE, DD_INSERT, DD_DELETE, DD_REJECT) used in the mapping, and not the options selected in the session properties.
=======================================
48.Informatica - What are the options in the target session of an Update Strategy transformation?
QUESTION #48 Insert, Delete, Update, Update as update, Update as insert, Update else insert, Truncate table.

Prasanna (February 03, 2006):
Update as Insert: this option specifies that all the update records from the source are to be flagged as inserts in the target. In other words, instead of updating the records in the target, they are inserted as new records.
Update else Insert: this option enables Informatica to flag the records either for update, if they are old, or for insert, if they are new records from the source.
=======================================
49.Informatica - What are the types of mapping wizards provided in Informatica?
QUESTION #49 The Designer provides two mapping wizards to help you create mappings quickly and easily. Both wizards are designed to create mappings for loading and maintaining star schemas, a series of dimensions related to a central fact table.
Getting Started Wizard: creates mappings to load static fact and dimension tables, as well as slowly growing dimension tables.
Slowly Changing Dimensions Wizard: creates mappings to load slowly changing dimension tables based on the amount of historical dimension data you want to keep and the method you choose to handle historical dimension data.

sithusithu (January 09, 2006):
Simple Pass Through
Slowly Growing Target
Slowly Changing Dimension:
Type 1 - most recent values
Type 2 - full history (version, flag or date)
Type 3 - current and one previous value
=======================================
In the Designer: Mapping -> Wizards:
1) Getting Started
--> Simple pass-through mapping
--> Slowly growing target
2) Slowly Changing Dimensions
---> SCD 1 (only recent values)
---> SCD 2 (history, using flag or version or time)
---> SCD 3 (recent values and one previous)
One important point is that dimensions are of two types:
1) slowly growing targets
2) slowly changing dimensions.
=======================================
50.Informatica - What are the types of mapping in the Getting Started Wizard?
QUESTION #50 Simple Pass Through mapping: loads a static fact or dimension table by inserting all rows. Use this mapping when you want to drop all existing data from your table before loading new data.
Slowly Growing Target: loads a slowly growing fact or dimension table by inserting new rows. Use this mapping to load new data when existing data does not require updates.

sithusithu (January 09, 2006):
1. Simple Pass Through
2. Slowly Growing Target
Cheers, Sithu
=======================================
51.Informatica - What are the mappings that we use for slowly changing dimension tables?
QUESTION #51 Type 1: rows containing changes to existing dimensions are updated in the target by overwriting the existing dimension. In the Type 1 Dimension mapping, all rows contain current dimension data. Use the Type 1 Dimension mapping to update a slowly changing dimension table when you do not need to keep any previous versions of dimensions in the table.
Type 2: the Type 2 Dimension Data mapping inserts both new and changed dimensions into the target. Changes are tracked in the target table by versioning the primary key and creating a version number for each dimension in the table. Use the Type 2 Dimension/Version Data mapping to update a slowly changing
dimension table when you want to keep a full history of dimension data in the table. Version numbers and versioned primary keys track the order of changes to each dimension.
Type 3: the Type 3 Dimension mapping filters source rows based on user-defined comparisons and inserts only those found to be new dimensions into the target. Rows containing changes to existing dimensions are updated in the target. When updating an existing dimension, the Informatica Server saves the existing data in different columns of the same row and replaces the existing data with the updates.

mamatha (June 03, 2006):
Hello sir, I want the whole information on slowly changing dimensions, and also a project on slowly changing dimensions in Informatica. Thanking you, mamatha.
=======================================
1. Update Strategy transformation
2. Lookup transformation
=======================================
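The difference between Type 1 (overwrite, no history) and Type 2 (versioned insert, full history) described in this question can be sketched as follows. The table layouts, keys and helper names are illustrative only, not Informatica artifacts:

```python
# Sketch of Type 1 vs Type 2 handling of a changed dimension row.
# Keys and field names are illustrative, not Informatica artifacts.
def apply_type1(dim, key, row):
    dim[key] = dict(row)               # Type 1: overwrite, no history kept

def apply_type2(dim_rows, key, row):
    versions = [r for r in dim_rows if r["key"] == key]
    new_version = max((r["version"] for r in versions), default=0) + 1
    # Type 2: insert a new row with the next version number
    dim_rows.append({"key": key, "version": new_version, **row})

dim1 = {}
apply_type1(dim1, "CUST1", {"city": "Pune"})
apply_type1(dim1, "CUST1", {"city": "Delhi"})   # old value is lost

dim2 = []
apply_type2(dim2, "CUST1", {"city": "Pune"})
apply_type2(dim2, "CUST1", {"city": "Delhi"})   # both versions kept
```

After the two loads, the Type 1 table holds only Delhi, while the Type 2 table holds both rows with version numbers 1 and 2, which is exactly the history-keeping trade-off the wizard choices represent.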
SCD mapping chain:
Source to SQ - mapping link 1; SQ to LKP - link 2; SQ/LKP to EXP - link 3; EXP to FTR - link 4; FTR to UPD - link 5; UPD to TGT - link 6; Sequence Generator to TGT - link 7.
I think these are the 7 links used for an SCD in general. For Type 1 the mapping will be doubled, that is one flow for insert and another for update, 14 in total. For Type 2 the mapping will be increased thrice: one flow for insert, a second for update, and a third to keep the old row (this is where the history is stored). For Type 3 it will be doubled: one flow to insert the row and another to populate the extra column that keeps the previous data.
Cheers, Prasath
=======================================
52.Informatica - What are the different types of Type 2 dimension mapping?
QUESTION #52 Type 2 Dimension/Version Data mapping: in this mapping, an updated dimension in the source gets inserted into the target along with a new version number, and a newly added dimension
in the source gets inserted into the target with a primary key.
Type 2 Dimension/Flag Current mapping: this mapping is also used for slowly changing dimensions. In addition, it creates a flag value for a changed or new dimension. The flag indicates whether the dimension is new or newly updated. Recent dimensions get saved with a current flag value of 1, and updated dimensions are saved with the value 0.
Type 2 Dimension/Effective Date Range mapping: this is another flavour of Type 2 mapping used for slowly changing dimensions. This mapping also inserts both new and changed dimensions into the target, and changes are tracked by the effective date range for each version of each dimension.

sithusithu (January 04, 2006):
Type 2:
1. Version number
2. Flag
3. Date
Cheers, Sithu
=======================================
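The Flag Current and Effective Date Range flavours of Type 2 described above can be sketched together: when a change arrives, the previous version is retired (flag set to 0, end date closed) and the new version is inserted as current. Field names and dates are illustrative only:

```python
# Sketch of the Type 2 "flag current" / "effective date range" flavours.
# Field names and dates are illustrative, not Informatica artifacts.
def insert_flag_current(dim_rows, key, row, load_date):
    for r in dim_rows:
        if r["key"] == key and r["current_flag"] == 1:
            r["current_flag"] = 0          # retire the previous version
            r["end_date"] = load_date      # close its effective date range
    dim_rows.append({"key": key, "current_flag": 1,
                     "start_date": load_date, "end_date": None, **row})

dim = []
insert_flag_current(dim, "P1", {"price": 10}, "2006-01-01")
insert_flag_current(dim, "P1", {"price": 12}, "2006-06-01")  # change arrives
```

After the second load, the old row carries flag 0 with a closed date range, while the new row carries flag 1 with an open end date; querying "current" rows is then a simple filter on the flag or on a null end date.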
53.Informatica - How can you recognise whether or not newly added rows in the source get inserted in the target?
QUESTION #53 In the Type 2 mapping we have three options to recognise the newly added rows: version number, flag value, effective date range.

rishi (December 14, 2005):
If it is a Type 2 dimension the above answer is fine, but if you want to get the information about all the insert statements and updates, you need to use the session log file, with tracing configured to verbose. You will get the complete set of data: which record was inserted and which was not.
=======================================
Just use the Debugger to see how the data from the source moves to the target; it will show how many new rows get inserted and which ones get updated.
=======================================
54.Informatica - What are the two types of processes that run an Informatica session?
QUESTION #54 The Load Manager process: starts the session, creates the DTM process, and sends post-session email when the session completes.
The DTM process: creates threads to initialize the session, read, write and transform data, and handle pre- and post-session operations.
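The division of labour between the two processes can be summarised as an ordered sequence of steps. This is only an illustrative outline of the duties listed in this thread, not real server code; the step names are paraphrased:

```python
# Illustrative outline of the two-process session flow: the Load Manager
# prepares the workflow and hands off to the DTM. Not real server code.
def run_session():
    log = []
    # Load Manager process duties
    log += ["lock workflow", "create workflow log",
            "start non-session tasks", "start DTM"]
    # DTM process duties
    log += ["read session properties", "create session log",
            "create reader/writer/transform threads",
            "run pre/post commands and stored procedures",
            "send post-session email"]
    return log

steps = run_session()
```

The key point the outline captures is ordering: every Load Manager duty completes before the DTM is spawned, and the DTM owns all per-session work from reading properties through post-session email.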
rasmi (September 17, 2007):
When the workflow starts to run, the Informatica Server starts two processes: the Load Manager process and the DTM process.
The Load Manager process has the following tasks:
1. Lock the workflow and read the properties of the workflow.
2. Create the workflow log file.
3. Start all the tasks in the workflow except sessions and worklets.
4. Start the DTM process.
5. Send the post-session email if the DTM terminates abnormally.
The DTM process is involved in the following tasks:
1. Read the session properties.
2. Create the session log file.
3. Create threads, such as the master thread and the reader, writer and transformation threads.
4. Send post-session email.
5. Run the pre- and post-session shell commands.
6. Run the pre- and post-session stored procedures.
=======================================
55.Informatica - Can you generate reports in Informatica?
QUESTION #55 Yes. By using the Metadata Reporter we can generate reports in Informatica.

sithusithu (January 19, 2006):
Informatica is an ETL tool: you cannot build business reports from it, but you can generate a metadata report, which is not meant for business analysis.
Cheers, Sithu
=======================================
Can you please tell me how to generate metadata reports?
=======================================
56.Informatica - Define mapping and sessions?
QUESTION #56 Mapping: a set of source and target definitions linked by transformation objects that define the rules for data transformation.
Session: a set of instructions that describe how and when to move data from sources to targets.

Pavani (December 04, 2006):