
Oracle GoldenGate Interview Questions

Q 1: What is Oracle GoldenGate?

A 1: Oracle GoldenGate is software that enables the continuous, real-time capture, routing, transformation, and delivery of transactional data across heterogeneous environments. It transfers committed data changes from an Oracle or non-Oracle source database to Oracle or non-Oracle target databases with low latency.

Q 2: What is Oracle GoldenGate Delivery module?

A 2: The Oracle GoldenGate Delivery module takes the data transactions from the latest Trail File and applies that data to the target using the native SQL for that relational database management system—delivery can be made to any ODBC compliant database. The Delivery module applies each transaction in the same order as it was committed and within the same transactional context as at the source, to ensure consistency and referential integrity at the target.

One key point to note is that the changes can be delivered to Oracle or Non-Oracle databases.

Q 3: What is the significance of Oracle GoldenGate Manager?

A 3: To give users control over Oracle GoldenGate processes, Manager provides a command line interface to perform a variety of administrative, housekeeping, and reporting activities, including:

  • Setting parameters to configure and fine-tune Oracle GoldenGate processes
  • Starting, stopping, and monitoring capture and delivery modules
  • Critical, informational event, and threshold reporting
  • Resource management
  • Trail File management
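As an illustration, Manager is configured through its own parameter file (dirprm/mgr.prm) and controlled from GGSCI. The port number and purge rule below are assumptions, not required values:

ggsci > edit params mgr

-- mgr.prm --
PORT 7809
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3

ggsci > start manager
ggsci > info manager

Here PURGEOLDEXTRACTS lets Manager handle Trail File housekeeping automatically based on process checkpoints.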

Q 4: What are the Key Benefits of GoldenGate?

A 4:

A) Transforming and Mapping the data between two databases without any need of middleware software
— Oracle GoldenGate can flexibly accommodate transformations and mappings within either the Capture or Delivery modules—no middle tier server is needed. The product supports table and row filtering based on user-defined criteria.

B) Flexible topology support
Oracle GoldenGate’s architecture allows customers to support a variety of topologies, including one-to-one, one-to-many, many-to-one, many-to-many, cascading, and bidirectional configurations.

C) Bidirectional configuration support
Oracle GoldenGate enables both active-passive and active-active solutions for maximum system availability.

D) Routing and Compression
Oracle GoldenGate utilizes TCP/IP for sending data, so no geographical distance constraints are imposed between source and target systems. In addition, Oracle GoldenGate can apply additional compression to the data as it is routed.

E) Data Encryption
Data encryption ensures secure, confidential data transmissions.

F) Excellent Support for Operational Reporting Solutions

  • Organizations can offload reporting operations to synchronized systems to minimize the overhead on the production environment.
  • Oracle GoldenGate does not limit the type of operations that can be executed on the reporting instance. Reporting needs that include the creation of temporary tables or other processes to prepare data for reporting can be run on the reporting instance without impacting primary business systems.
  • Oracle GoldenGate can be combined with Oracle Data Integrator for complex data transformation.
  • By using Oracle GoldenGate the option exists to include bi-directional replication for key system tables or for entire sets of data. Oracle GoldenGate is flexible to fit the needs of both the application being used and business users.
  • Oracle GoldenGate is tested and certified on major Oracle applications including E-Business Suite, JD Edwards, PeopleSoft and Siebel CRM.
  • Oracle GoldenGate reads changed data from database transaction logs rather than from the database tables themselves. Because it requires minimal modifications to the applications, this solution provides organizations with a compact, non-intrusive, and easily configured method for providing access to real-time data for reporting purposes.

Just for the sake of distinguishing between operational and analytical reporting, basic definitions are given below.

Operational reports are used for day-to-day business processes and decision making. They are typically based on an active data set that contains the last quarter or two of transactions. The reports contain detailed information for the active accounts and are used by managerial and business staff to optimally steer daily operations. For operational reporting, the data source can be either E-Business Suite directly or, for larger implementations, an Operational Data Store.

On the other hand, analytics reports are based on larger data sets that usually include one to ten or more years of transactional records. Executive staff uses them for strategic decision making and intelligence gathering. Data is typically stored in a data warehouse and is aggregated in a way that enables efficient analysis.

Q 5: For reporting needs, how are older ETL-based bulk data movement solutions less effective than a GoldenGate solution?

A 5: Some key areas where traditional reporting solutions fall short are:

  • Batch windows: With ETL integration, the data transfer processes are executed during maintenance windows when the data sources are quiesced; this is necessary to ensure that data sources don’t change during data acquisition, which would create inconsistencies. For business-critical systems these batch windows are shrinking while the business operations approach 24/7. Further, handling ever-growing data volumes in these shrinking batch windows is becoming a major challenge for ETL users.
  • Overhead on the source: Source tables are queried and potentially large amounts of data are copied either to other tables for export or directly to flat files in the file system. The net effect is that the more data extracted, the more resources are required to perform the work.
  • Reliability/recoverability: Because ETL tools are geared towards processing data in batches, they do not effectively maintain the transaction integrity of data in transit. If an ETL process is interrupted, the partially transferred data often cannot be applied.

Q 6: Why is it highly desirable that tables you want to replicate have a primary key?

A 6: In simple words, GoldenGate requires a primary key to uniquely identify a record.

If a primary key does not exist on the source table, GoldenGate will create its own unique identifier by concatenating all the table columns together. This is inefficient, as the volume of data that must be extracted from the redo logs increases dramatically. In the normal scenario, when a table has a primary key, the GoldenGate process fetches only the primary key and the changed data (before and after images in the case of an update statement).

GoldenGate will also warn you if a primary key does not exist on the target table; you may receive the following warning in the GoldenGate error log:

WARNING OGG-xxxx No unique key is defined for table ‘TARGET_TABLE_NAME’. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.

Having a primary key also ensures fast data lookup when the Replicat process recreates and applies the DML statements against the target database. But keep in mind that it is not mandatory for a primary key to be present on the table.

Q 7: Is it a MUST that the source database be in archivelog mode?

A 7: It is NOT mandatory that the source database be in archivelog mode, but for any serious, mission-critical GoldenGate system it is almost essential to have the source system in archivelog mode.

GoldenGate needs archivelog mode so that it can mine the archived redo logs following a fallback or outage in GoldenGate replication, thereby preventing any data loss.

So for testing/demo purposes you can have the source database in noarchivelog mode, since you are not much bothered about data consistency or data loss. But if you are creating a GoldenGate setup for your company’s production system, make enabling archivelog mode on the source your foremost priority.
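As a quick illustration, the standard SQL*Plus steps to check and enable archivelog mode are (run as SYSDBA; a database restart is required):

SQL> select log_mode from v$database;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;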

Q 8: Without going into details, explain high level steps of setting up GoldenGate.

A 8: Below are the key steps to install/configure the GoldenGate system.

  1. Download the software from the Oracle website and upload to server
  2. Unpack/Unzip the installation zip file
  3. Prepare source and target system
  4. Install the software on the source and target system (for 12c use OUI)
  5. Prepare the source database (some DB parameters need to be adjusted)
  6. Configure the Manager process on the source and target system
  7. Configure the Extract process on the source system
  8. Configure the data pump process on the source system
  9. Configure the Replicat process on the target system
  10. Start the Extract process
  11. Start the data pump process
  12. Start the Replicat process
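Steps 6 through 12 above can be sketched with GGSCI commands as below; the process and trail names (ext1, pmp1, rep1, ./dirdat/lt, ./dirdat/rt) are hypothetical examples, not fixed names:

-- on the source --
ggsci > add extract ext1, tranlog, begin now
ggsci > add exttrail ./dirdat/lt, extract ext1
ggsci > add extract pmp1, exttrailsource ./dirdat/lt
ggsci > add rmttrail ./dirdat/rt, extract pmp1
ggsci > start extract ext1
ggsci > start extract pmp1

-- on the target --
ggsci > add replicat rep1, exttrail ./dirdat/rt
ggsci > start replicat rep1

Each process also needs its own parameter file (edit params ext1, and so on) before it is started.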

Q 9: When creating the GoldenGate database user for Database 12c, what special precaution do you need to take?

A 9: You must grant the GoldenGate admin user access to all database containers on the source side so that GoldenGate can access the redo logs for all the databases (container and pluggable).

You must also grant the DBA role with the container=all option. Otherwise you may hit errors such as:

OGG-06203 Failed to switch to catalog PDB1. OCI Error ORA-01031: insufficient privileges SQL ALTER SESSION SET CONTAINER=PDB1
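A sketch of the grants for a multitenant source, assuming a common user named c##ggadmin (the user name and password are assumptions):

SQL> create user c##ggadmin identified by oggpwd container=all;
SQL> grant dba to c##ggadmin container=all;
SQL> exec dbms_goldengate_auth.grant_admin_privilege('C##GGADMIN', container=>'all');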

Q 10: What is the concept of GoldenGate Credential Store in GoldenGate 12c?

A 10:

You can store the GoldenGate user’s database logon credentials in the Credential Store. This way you do not have to store the password in clear text in the GoldenGate parameter files.

Simple commands to create a Credential Store are shown below:

ggsci > create wallet
ggsci > add credentialstore
ggsci > alter credentialstore add user ogg@ORCL password ogg alias aogg
ggsci > info credentialstore
ggsci > dblogin useridalias aogg

Here ogg is the GoldenGate admin user, ORCL is the database SID, and aogg is the alias. The last command, “dblogin useridalias aogg”, will connect you to the database.

Q 11: Why do you need to increase value of UNDO_RETENTION parameter while setting up GoldenGate?

A 11: It is required to increase the UNDO_RETENTION value so that GoldenGate can maintain a read-consistent row image for a specific time or SCN to match the redo record. Please note that Oracle GoldenGate 12c uses Flashback Query to obtain values from the undo records for special database objects like user-defined types, nested tables, etc. Increasing UNDO_RETENTION makes sure that GoldenGate is able to meet the goal of read consistency.

Oracle recommends that UNDO_RETENTION parameter should be set to 86400 seconds (24 hours).
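For example, the recommended value can be set with:

SQL> alter system set undo_retention=86400 scope=both;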

Q 12: Which databases does GoldenGate 12c support?

A 12: On the Oracle side, GoldenGate 12c supports 11g and 12c databases. As noted earlier, GoldenGate also supports heterogeneous configurations involving non-Oracle databases.

Q 13: What is Downstream capture mode of GoldenGate?

A 13: Traditionally, log mining for the source data happens on the source database side, but in downstream capture mode the Oracle Data Guard redo transport mechanism is used. This enables continuous log shipping to the target database’s standby redo logs in real time. The log mining work to fetch DDL/DML transactions happens on the target side.

One clear benefit of this configuration is that if a failure occurs at Site A, the database log mining server at Site B will continue to process the redo log stream up to the point of failure.

Q 14: How do you take backup of GoldenGate?

A 14: Your source/target database can be backed up easily using tools like Oracle Recovery Manager (RMAN), but to back up GoldenGate you need to back up the GoldenGate home and the subdirectories that contain the trail files, checkpoint files, etc. Without these key files, GoldenGate will not be able to recover from the last checkpoint. This means that if you somehow lose all these key GoldenGate files, you will have no option but to go for a new initial load. RMAN simply does not have the capability to back up the OS or non-database files.

So either keep all your GoldenGate-related files on some kind of SAN setup that gets backed up daily at the storage level, or use Unix shell commands in a cron job to take filesystem backups.

Q 15: What is a checkpoint table? In which capture mode is it used: classic or integrated?

A 15: Oracle GoldenGate Extract and Replicat processes perform checkpoint operations. In the event of an unexpected failure, the checkpoint file or database table ensures that Extract and Replicat restart from the point of failure and avoid re-capture and re-apply of transactions.

So the checkpoint table enables the checkpoint to be included within Replicat’s transaction, ensuring complete recovery from all failure scenarios.

You use the GGSCI add checkpointtable command to create the checkpoint table.

The checkpoint table is used in classic capture/replicat mode.
For integrated mode, the checkpoint table is not required and should not be created.
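For illustration, the checkpoint table can be created as below; the schema and table name (ogg.ggs_checkpoint) are assumptions:

ggsci > dblogin useridalias aogg
ggsci > add checkpointtable ogg.ggs_checkpoint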

Q 16: Why is it important to specify DISCARDFILE in the Extract/Replicat parameter file?

A 16: The DISCARDFILE parameter, when defined in the parameter file, captures rejected rows along with the associated cause of failure. This proves very important when you are troubleshooting. In the parameter file you define the size and location of the discard file, and the GoldenGate engine will start populating this file in case of failures.
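A sketch of a Replicat parameter file that defines a discard file; the process name, alias, file name, and schema mappings are assumptions:

REPLICAT rep1
USERIDALIAS aogg
DISCARDFILE ./dirrpt/rep1.dsc, APPEND, MEGABYTES 100
MAP scott.*, TARGET scott.*;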

Q 17: Your database is already generating archivelogs. Why do you need to generate trail files for GoldenGate? Why can’t you just capture changes at source and apply to target without wasting time in writing changes first to Trail files?

A 17: One of the key aspects of GoldenGate (besides the speed of data movement) is that it maintains data integrity even when a process or the network fails.

Trail files are used by the Extract and Replicat processes, and their sole purpose is to maintain data persistence. So even if the network between source and target fails for some reason, the GoldenGate processes will restart from the exact same point when the network is restored. No data loss occurs and no re-instantiation of tables is required. This is very critical for a production system.

Also note that GoldenGate writes only the committed transactions to trail files in large blocks which minimizes the I/O.

Q 18: What is Downstream mining database topology in context of GoldenGate? 

A 18: Just like Oracle Streams, where both the capture and apply processes can be configured on the target system, Oracle has provided this capability in GoldenGate versions higher than 11gR2. This is also known as source database offloading. In this configuration, the source database sends its redo stream data to the target system. On the target side, the Extract process uses the LogMiner server on the “downstream” target database to mine the logs and apply the changes to the target database.

The redo stream data transferred to the target database contains both online and archived logs. This is similar to a Data Guard setup, which is used by Oracle Streams in downstream capture mode; using this setup means modifying LOG_ARCHIVE_DEST_STATE_n on the source and target, creating standby redo logs on the target database, and so on.

Q 19: What is BATCHSQL mode and how can it increase the performance of GoldenGate?

A 19: When GoldenGate is replicating changes to the target database in default mode, the Replicat process applies SQL to the target database one statement at a time. This can cause a performance issue where the Replicat process cannot apply the changes quickly enough compared to the rate at which the Extract process delivers the data.

GoldenGate has addressed this issue through the use of the BATCHSQL configuration parameter for the Replicat process. BATCHSQL segregates similar SQL statements into batches and applies them all at once. Similar SQL statements are those that perform a specific operation type (insert, update, or delete) against the same target table with the same column list.

Note that Oracle Large Objects (LOBs) and rows greater than 25 KB are not supported by BATCHSQL.
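A sketch of enabling BATCHSQL in the Replicat parameter file; the process name and tuning values are assumptions, and BATCHSQL can also be specified with no options at all to use its defaults:

REPLICAT rep1
USERIDALIAS aogg
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
MAP scott.*, TARGET scott.*;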




Brijesh Gogia

I’m an experienced Oracle Applications DBA with more than a decade of full-time DBA experience. I have gained a wide knowledge of the Oracle software stack and have worked on several big projects for multi-national companies. I enjoy working with the leading-edge technology and have passion for database performance and stability. Thankfully my work allows me time for researching new technologies (and to write about them).
You can connect with me on LinkedIn.
