Tuesday 27 November 2018

Why more redo log generation during Hot Backup – Oracle


Many of you must have heard, or experienced, that while taking a hot backup of a database the LGWR process writes aggressively, meaning more redo data is written to the redo log files and consequently more archive logs get generated.

Here is the common misconception we have in our mind. If someone asks why excessive redo and archive logs are generated when we start a hot backup of the database, we quickly answer: it's simple, when we put a tablespace in hot backup mode, Oracle takes a checkpoint of the tablespace and the datafiles belonging to it are frozen. Any user activity on objects in that tablespace won't write data to those datafiles; instead it writes data to the redo log files, so obviously more redo is generated.
Well, that explanation is, for the most part, COMPLETELY WRONG!
I will come straight to the point and explain what actually happens when we put a tablespace in hot backup mode.
Your first assumption, that the datafiles belonging to a tablespace in hot backup mode are frozen, is wrong. The datafiles are not frozen; only the datafile headers are! So when you put the tablespace in backup mode, Oracle takes a checkpoint and updates the datafile headers with the checkpoint SCN, and from that point the headers are frozen until we take the tablespace out of backup mode.
The rest of each datafile (everything other than the header) behaves as normal, and data changes continue to be written to it.
Now you may want to ask, "Do you mean the datafiles keep getting updated even while we are copying them to the backup location?" The answer is YES. Never think that the datafile you are copying is "consistent". No, the datafiles keep changing!
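For reference, a minimal user-managed hot backup sequence looks something like this (the tablespace name and file paths are illustrative):

SQL> alter tablespace users begin backup;
 $ cp /u01/oradata/prod/users01.dbf /backup/users01.dbf
SQL> alter tablespace users end backup;

Between BEGIN BACKUP and END BACKUP the datafile copy is taken with a plain OS command, which is exactly why it can contain blocks caught mid-update.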
You might then want to ask a couple more questions.
1) If the backup copy of the file is not consistent and keeps changing, how is Oracle able to recover the database when we restore that datafile?
2) If data changes are happening continuously on the datafiles anyway, why is there excess redo log generation?
That's it! Don't ask me more than this; let me explain the answers to these questions.
Consider a typical case where an Oracle database is installed on a Linux platform. The standard Oracle block size is 8K, and let's say the OS-level block size is 512 bytes. Now, when we put the tablespace in "Begin Backup" mode, a checkpoint happens and the datafile header is frozen. You find the files belonging to this tablespace and start copying them using an OS command. An OS copy works in OS block units. Let's say the copy starts by reading 8 OS blocks, i.e. 4K (512 bytes x 8), to the backup location; that means you have copied half of an Oracle block. This copy process can be preempted by the OS scheduler depending on load. So let's say that after copying those 8 blocks (4K, half an Oracle block) your process is preempted and the CPU is given to some other process. Meanwhile, DBWR changes the very block you have copied halfway (since only the header is frozen, continuous updates can happen to the datafile).
After a while your copy process gets the CPU back and copies the next 8 OS blocks (the other half of the Oracle block). Here is the problem: we copied half of the Oracle block as it was at time T0 and the other half as it was at time T1, and in between the block changed. Does that sound consistent? Not to me! Such a block is called a "fractured block".
Well, since backups are copied like this, Oracle must do something so that recovery will not run into problems.
Normally, for a tablespace that is not in begin backup mode, when a transaction happens Oracle generates redo information and writes it to the redo log file. This is the bare minimum information needed to redo the change; Oracle does not copy the complete block. In begin backup mode, however, when a transaction changes a block FOR THE FIRST TIME, Oracle copies the complete block image to the redo log file. This happens only on the first change; if a subsequent transaction updates the same block again, Oracle does not copy the whole block, it just generates the minimal redo for the change. Because Oracle has to log the complete block the first time it changes while the tablespace is in begin backup mode, we say that excess redo is generated when we put a tablespace in begin backup mode.
The question is why Oracle has to copy the complete block to the redo log. As you have seen above, the copied datafile can contain many fractured blocks, and during restore and recovery Oracle puts those blocks back and tries to recover them. If a block is fractured and Oracle only has the minimal redo information, it would not be possible to recover that block. Instead, Oracle applies the entire block image from the redo log to the datafile during recovery, which makes the datafile consistent again. So the recovery process is what takes care of all fractured blocks and makes it possible to recover the database.
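If you want to see this for yourself, a rough check (the table name is illustrative) is to note the session statistic 'redo size' before and after the same small update, once with the tablespace in backup mode and once without; the difference is much larger on the first change to a block while its tablespace is in begin backup mode:

SQL> select value from v$mystat m, v$statname n
     where m.statistic# = n.statistic# and n.name = 'redo size';
SQL> update test_tab set col1 = col1 where rownum = 1;
SQL> commit;
SQL> select value from v$mystat m, v$statname n
     where m.statistic# = n.statistic# and n.name = 'redo size';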
I hope this answers the two questions above.
Now you can also easily explain why hot backup is not possible if the database is in NOARCHIVELOG mode.
When you take a backup using RMAN, it does not generate this excess redo. The reason is simple: RMAN is intelligent. It reads datafiles in Oracle block units rather than OS blocks, and it verifies each block as it copies it (re-reading a block if it finds it fractured), so the blocks in the backup are consistent and the full block images in redo are not needed.
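For comparison, a minimal RMAN backup of the same tablespace (tablespace name illustrative) needs no BEGIN/END BACKUP at all and still produces a usable backup:

RMAN> backup tablespace users;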
Hope this helps !!!!!!!

How to change DNS in AIX.



Most AIX system administrators use smitty tcpip to add a hostname, IP address, network mask, gateway and DNS server setting to get the virtual machines (LPARs) and AIX onto a network. As a side benefit, we no longer need the ghastly VTERM console, since ssh (or the dreaded, insecure telnet - just don't do it) can be used instead. As the root user:
 # smitty tcpip
Move cursor to desired item and press Enter.
  Minimum Configuration & Startup
  Further Configuration
  Use DHCP for TCPIP Configuration & Startup
  IPV6 Configuration
  Quality of Service Configuration & Startup
  Configure IP Security (IPv4)
  Configure IP Security (IPv6)
Once smitty starts, choose the second menu option, "Further Configuration":

Move cursor to desired item and press Enter.

  Hostname
  Static Routes
  Network Interfaces
  Name Resolution
  Client Network Services
  Server Network Services
  Manage Print Server
  Select BSD style rc Configuration
  Start Configured TCPIP Daemons
  Stop TCPIP Daemons
  Authentication Configuration
Then the Fourth option "Name Resolution"
Move cursor to desired item and press Enter.
  Domain Nameserver
  Hosts Table (/etc/hosts)
Then the First option "Domain Nameserver"
Move cursor to desired item and press Enter.
  Start Using the Nameserver
  List All Nameservers
  Add a Nameserver
  Remove a Nameserver
  Stop Using a Nameserver
-------------------------------
  Set / Show the Domain
  Remove the Domain
  Set / Show the Domain Search List
  Remove the Domain Search List
Then the second option, "List All Nameservers", to show the existing DNS entries (to return to the previous menu, press ESC+3).
Then the fourth option, "Remove a Nameserver", to remove the old DNS entry.
Then the third option, "Add a Nameserver", to add the new DNS entry (to commit, press ESC+4).
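All of these smitty screens ultimately maintain /etc/resolv.conf, so after committing the change you can verify it directly as root (the IP address and domain shown are illustrative):

 # cat /etc/resolv.conf
 nameserver 172.16.1.10
 domain     example.com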

Tuesday 18 September 2018

Error: "Unable to Connect" Oracle Apex Access Issue

Recently we installed Oracle APEX. The installation completed successfully and APEX ran perfectly when accessed from the local system, but when we accessed it remotely we ran into a problem getting the APEX application to load.

Error: Unable to Connect


Cause: APEX is accessed through the XDB protocols, and XDB protocols require DISPATCHERS, but we had not configured a dispatcher.

Solution: Configure a dispatcher for the XDB service, as in the example below. One thing to remember: if you are only using XDB services for batch processing or for intermittent connections, you can set DISPATCHERS dynamically and turn it on only when needed.
This is because XDB protocols like HTTP and FTP require shared server.

Example: 

dispatchers="(PROTOCOL=TCP)(SERVICE=TESTXDB)"

LOCAL_LISTENER = "(ADDRESS=(PROTOCOL=TCP)(HOST=Hostname)(port=1528))"
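Since DISPATCHERS is a dynamic parameter, it can also be set and verified without a restart; a minimal sketch, assuming the XDB service name follows the usual <SID>XDB convention:

SQL> alter system set dispatchers='(PROTOCOL=TCP)(SERVICE=TESTXDB)' scope=both;
SQL> alter system register;
SQL> select dbms_xdb.gethttpport from dual;   -- confirm the XDB HTTP port is configured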


Monday 17 September 2018

How to change EBS R12 Database and Application Port Number

Recently we faced an application port blocking issue in Oracle EBS R12.
We followed the steps below to resolve it.

Execute the following on the Application Tier:

Run adpreclone on Application Tier
   $ perl ./adpreclone.pl appsTier

Execute the following on the Database Tier:

Log on as the oracle database user and set the environment for the R12 RDBMS ORACLE_HOME.

Start the database instance for which you want to change the port.

Back up the context file.

Change the following parameter values in the context file:

        dbport  : Change database port from 1521 to 1531
        cmanport : 1521 to 1531
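The port-related entries in the database tier context file can be checked before and after the edit with a quick grep (s_dbport is the standard context variable name; CONTEXT_FILE is set by the database tier environment file; the second command simply looks for leftover references to the old port):

   $ grep -i "s_dbport" $CONTEXT_FILE
   $ grep -i "1521" $CONTEXT_FILE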

Run adautocfg.sh on Database Tier

Also change the port in the listener.ora and tnsnames.ora files.

Then execute the following on the Application Tier:

Log on as the application user and set the R12 environment.

Back up the context file.

Stop all oracle services for Applications by running "adstpall.sh".

Run adcfgclone and choose new port pool
   $ cd $COMMON_TOP/clone/bin
   $ perl ./adcfgclone.pl appsTier

Check new application URL

SQL> select home_url from icx_parameters;

HOME_URL
------------------------------------------------------------
http://<hostname.domainname>:8010/OA_HTML/AppsLogin

ORA-12152 "Unable To Send Break Message" on Connection

When trying to log in to the database from a client (Toad, SQL Developer) through a Cisco GSS and a CSM (Content Switch Module), the connection fails to complete, or fails before receiving any data, with a network-related error: ORA-12152 "Unable To Send Break Message".
Solution: To resolve such an issue, make sure the idle timeout on all switches and firewalls is set to the highest possible value and test. Once all connections are successful, the idle time limit can be tuned downward to a value that won't cause an interruption. This type of configuration is normally handled by the network administrator, so this information should be passed on to them for actioning.
Ref:ORA-12152 "Unable To Send Break Message" on Connection [ID 363638.1]
OR
Error: ORA-12152 "Unable To Send Break Message"
Cause: Unable to send break message. Connection probably disconnected.
Action: Re-establish the connection. If the error is persistent, turn on tracing and re-execute the operation.
Or you can try this:
Add SQLNET.EXPIRE_TIME to the database sqlnet.ora file. This file is located in the $ORACLE_HOME/network/admin directory, or in the directory specified by the TNS_ADMIN environment variable.
The EXPIRE_TIME should be set to the highest value possible that is within the firewall idle connection timeout, to limit the amount of network traffic generated.
For example if the firewall timeout is 20 minutes set: SQLNET.EXPIRE_TIME=15

How to configure XVFB in an AIX Environment?

Issue: On the live server, concurrent requests keep running continuously and never complete.

Cause: The VNC server had hung.

Solution: To resolve it we installed the Xvfb (X virtual frame buffer) server, which is an alternative to vncserver,
so the vncserver process can be killed before installing Xvfb.

Resolve this issue by downloading and installing the filesets below:
X11.vfb
OpenGL.OpenGL_X.dev
Prerequisite filesets for OpenGL.OpenGL_X.dev:
-rw-r-----    1 root     system      1402880 Sep 14 14:25 OpenGL.OpenGL_X.adt
-rw-r-----    1 root     system     21670912 Sep 14 14:25 OpenGL.OpenGL_X.dev
-rw-r-----    1 root     system     35701760 Sep 14 14:25 OpenGL.OpenGL_X.rte
-rw-r-----    1 root     system      3738624 Sep 14 14:25 OpenGL.OpenGL_X.tools

Step 1: Installation (as the root user).
# installp -agcd . X11.vfb

# installp -agcd . OpenGL.OpenGL_X.dev

After installation, check the DISPLAY:

# export DISPLAY=erpprodappl:0.0
# xclock
Xlib: connection to "erpprodappl:0.0" refused by server
Xlib: Invalid MIT-MAGIC-COOKIE-1 key
Error: Can't open display: erpprodappl:0.0
# hostname
appnode2
# export DISPLAY=172.16.1.22:0.0
# xclock
Error: Can't open display: 172.16.1.22:0.0

Step 2: Run the command below to start the Xvfb server.
# /usr/lpp/X11/bin/xinit /usr/lpp/X11/bin/mwm -- :0 -vfb -force -x abx -x dbe -x GLX &
[1]     10813502
#
:0
# xhost + 172.16.1.22
172.16.1.22 being added to access control list

Step 3: Check that the X server is running.
# ps -ef|grep vfb OR  ps -ef|grep X11
    root 10617054 10813502   0 14:34:41  pts/0  0:00 /usr/lpp/X11/bin/X -cc 4 -x abx -x dbe -x GLX -D /usr/lib/X11/rgb :0 -vfb -force -x abx -x dbe -x GLX
    root 10813502  8519794   0 14:34:41  pts/0  0:00 /usr/lpp/X11/bin/xinit /usr/lpp/X11/bin/mwm -- :0 -vfb -force -x abx -x dbe -x GLX
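To have the virtual frame buffer come back automatically after a reboot (rather than starting it by hand each time), an inittab entry can be added; a sketch, assuming display :0 and the same options used above:

# mkitab "xvfb:2:respawn:/usr/lpp/X11/bin/X -force -vfb -x abx -x dbe -x GLX :0"
# lsitab xvfb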

Note: The issue above should be resolved after completing this activity.

Thursday 13 September 2018

How to start the Cluster in AIX Operating System

First of all, check the resource group status in AIX clustering (PowerHA/HACMP) with the command below.
Step 1 - Check which node each resource group is on:

/usr/es/sbin/cluster/utilities/clRGinfo



-----------------------------------------------------------------------------
Group Name     State            Node
-----------------------------------------------------------------------------
DBRG           OFFLINE          dbnode2
               ONLINE           dbnode1


http://www.unixmantra.com/2013/04/aix-powerha-hacmp-commands.html

Step 2 - List the cluster manager state:
lssrc -ls clstrmgrES

>> Cluster manager states

ST_NOT_CONFIGURED Node never started
ST_INIT Node configured but down - not  running
ST_STABLE Node up and running
ST_RP_RUNNING
ST_JOINING
ST_BARRIER
ST_CBARRIER
ST_VOTING
ST_RP_FAILED Node with event error

     
If the state is ST_INIT (node configured but down / not running), then run the command below.

Step 3 - Run on the down node:

smitty clstart


Step 4 - Move the resource group from node 1 to node 2:

 /usr/es/sbin/cluster/utilities/clRGmove -g DBRG -n dbnode2 -m

Step 5 - Confirm the move was successful:

/usr/es/sbin/cluster/utilities/clRGinfo

How To Purge FND_AQ Tables

Symptoms(Doc ID 1156523.1)

1. The Internal/Standard Manager takes a long time to start up.

2. The AQ$_FND_CP_GSM_OPP_AQTBL table grows enormously and holds a lot of records.
3. The following message is shown for a concurrent request for a BI Publisher report:

    Post-processing of request xxx failed with the error message:
    ORA-04021: timeout occurred while waiting to lock object


Changes

Improper shutdown of the concurrent managers, especially the OPP manager.


Cause

AQ tables are used by the Service Managers (FNDSM) to look for "subscriptions". That is, when the ICM calls FNDSM to start, they "subscribe" to this queue to identify its status. The time taken for process cleanup before the Internal Manager starts the regular concurrent managers is correlated to the number of processes that were not stopped cleanly. In the case of an unclean shutdown, the restart takes longer because the manager spends extra cycles on housekeeping tasks.

Solution

1. It is highly recommended to always ensure a clean shutdown of the concurrent managers.
2. The 'Purge Concurrent Request and/or Manager Data' program should be run periodically. Note that this purge program does not purge the AQ tables. To maintain a healthy level of records in fnd_concurrent_requests, instead of running the purge program with the same parameters for all applications, you can run it with different parameters per application, deciding for which applications records need to be kept for a certain number of days and for which only a day or so.
It is recommended to schedule a cron job that queries the records in APPLSYS.FND_CP_GSM_OPP_AQTBL to monitor it, and to use DBMS_AQADM.PURGE_QUEUE_TABLE to purge the table as needed (a small monitoring sketch is shown at the end of this note).
3. This is how to purge the FND_CP_GSM_OPP_AQTBL manually to clean up the table:
3.1. Check the number of records in the table :
SQL> select count(*) from applsys.FND_CP_GSM_OPP_AQTBL ;

 COUNT(*)
 ----------
 31759
3.2. Run the following as SYS:
DECLARE
po dbms_aqadm.aq$_purge_options_t;
BEGIN
po.block := FALSE;
DBMS_AQADM.PURGE_QUEUE_TABLE(
queue_table => 'APPLSYS.FND_CP_GSM_OPP_AQTBL',
purge_condition => NULL,
purge_options => po);
END;
/
3.3. Re-check again the number of records in the table
SQL> select count(*) from applsys.FND_CP_GSM_OPP_AQTBL ;

 COUNT(*)
 ----------
 0
If the purge does not complete successfully the second time, or does not purge all the queues, you will have to recreate the queue using the $FND_TOP/patch/115/sql/afopp002.sql script as the 'APPLSYS' user. When running the script you will be prompted for a username and password. Please note that this may take a long time to complete.
3.4. Please run the Concurrent Manager Recovery feature to address any Concurrent Manager / Concurrent Processing issues within the Oracle Application Manager.
3.5. Start up the application services and retest.

Note: The following applsys.AQ$_FND_CP_GSM_OPP_AQTBL_x objects are also cleaned as part of the above purge.
applsys.AQ$_FND_CP_GSM_OPP_AQTBL_S
applsys.AQ$_FND_CP_GSM_OPP_AQTBL_T
applsys.AQ$_FND_CP_GSM_OPP_AQTBL_H
applsys.AQ$_FND_CP_GSM_OPP_AQTBL_I
applsys.AQ$_FND_CP_GSM_OPP_AQTBL_G
applsys.AQ$_FND_CP_GSM_OPP_AQTBL_L

For best maintenance and monitoring run the script: Concurrent Processing - CP Analyzer for E-Business Suite (Document 1411723.1)
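As mentioned in recommendation 2 above, a small cron-driven check can keep an eye on the queue table; a rough sketch (the threshold, mail address and script name are illustrative):

#!/bin/ksh
# check_opp_aqtbl.sh - cron job: warn when the OPP AQ table grows too large
CNT=`sqlplus -s "/ as sysdba" <<EOF
set heading off feedback off pagesize 0
select count(*) from applsys.FND_CP_GSM_OPP_AQTBL;
exit;
EOF`
if [ ${CNT:-0} -gt 10000 ]; then
   echo "FND_CP_GSM_OPP_AQTBL has ${CNT} rows - consider DBMS_AQADM.PURGE_QUEUE_TABLE" |
      mail -s "OPP AQ table check" dba@example.com
fi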

R12 E-Business Suite Output Post Processor (OPP) Fails To Pick Up Concurrent Requests With Error 'Unable to find an Output Post Processor service to post-process request nnnnn'

E-Business Suite R12 Concurrent Processing, Output Post Processing related issues
Requests fail during Post Processing with the following errors:

'Unable to find an Output Post Processor service to post-process request xxxxx.

Check that the Output Post Processor service is running'

Cause

Incorrect data is present in the OPP queue table AQ$FND_CP_GSM_OPP_AQTBL_S, which occurs when the concurrent managers are not shut down correctly.

A quick SQL query shows orphaned OPP subscribers in APPLSYS.AQ$FND_CP_GSM_OPP_AQTBL_S, causing the select_random_subscriber routine to pick a non-running process.

Solution

To resolve the issue test the following steps in a development instance and then migrate accordingly:
1. Find the active processes for OPP:

SELECT fcp.concurrent_process_id
FROM fnd_concurrent_queues fcq, fnd_concurrent_processes fcp
WHERE concurrent_queue_name = 'FNDCPOPP'
AND fcq.concurrent_queue_id = fcp.concurrent_queue_id
AND fcq.application_id = fcp.queue_application_id
AND fcp.process_status_code = 'A';
Output example:
CONCURRENT_PROCESS_ID
---------------------
93557
93558
2. Find the OPP subscribers:
select name from APPLSYS.AQ$FND_CP_GSM_OPP_AQTBL_S;

The name in the table is the concurrent_process_id prefixed with 'OPP'.
The output should normally correspond to the output of the first query: the same number of records, each name being the concurrent_process_id of a running OPP process prefixed with 'OPP' (a combined query that lists only the leftover subscribers is sketched after step 5).

Output example:
NAME
------------------------------
OPP93558
OPP93557
OPP88933
OPP92625
3. If there are extra subscribers which are left from previous runs of OPP processes, unsubscribe them via the following command:
  exec fnd_cp_opp_ipc.unsubscribe('<concurrent_process_id>');
 Example:
  exec fnd_cp_opp_ipc.unsubscribe('88933');
  exec fnd_cp_opp_ipc.unsubscribe('92625');
4. Bounce the concurrent managers.

5. Retest some concurrent requests and confirm the post processing actions now complete successfully.
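For reference, the leftover subscribers referred to in step 3 can also be listed directly by combining the two queries above; a sketch using the same tables and columns:

select s.name
  from APPLSYS.AQ$FND_CP_GSM_OPP_AQTBL_S s
 where s.name like 'OPP%'
   and s.name not in
       (select 'OPP' || fcp.concurrent_process_id
          from fnd_concurrent_queues fcq, fnd_concurrent_processes fcp
         where fcq.concurrent_queue_name = 'FNDCPOPP'
           and fcq.concurrent_queue_id   = fcp.concurrent_queue_id
           and fcq.application_id        = fcp.queue_application_id
           and fcp.process_status_code   = 'A');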

Thursday 12 April 2018

Profile Set For Output Showing In Oracle Apps R12

Problem: I faced an internal issue where an error occurred while extracting report output.
Solution: Set the profile option used for showing output.
Applications Web Agent

The internal name for this profile option is APPS_WEB_AGENT. It provides the base URL for the Apps schema's WebServer DAD.
You set this profile option during the install process.
This profile option must be set to the URL which identifies the mod_plsql PL/SQL Gateway Database Access Descriptor base URL for your Applications instance.
Oracle Applications use the value of this profile option to construct URLs for ’WWW’ type functions, Attachments, Export, and other features.
Use the following syntax to enter your URL:
https://<hostname>:<port>/pls/<dad_name>
Example:http://erptestappldb.in:8007/pls/TEST
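The current value of the profile option can be checked quickly from SQL*Plus with the standard FND_PROFILE API (run as the APPS user; the internal name is as given above):

SQL> select fnd_profile.value('APPS_WEB_AGENT') from dual;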

All RDF Reports Running Forever With No Completion

Problem: I faced an internal issue where, in the production instance, concurrent programs had been running for a long time and were not completing.

Symptoms:
All RDF reports, such as "Active Users", run forever with no completion and there is no error in the request, manager, or internal manager logs.

Submit Active Users with command line: CONCSUB

CONCSUB is a utility to submit a concurrent request from the operating system level, without having to log on to Oracle Applications.

$CONCSUB apps/<pass> SYSADMIN "System Administrator" SYSADMIN WAIT=Y CONCURRENT FND FNDSCURS PROGRAM_NAME='"Active Users"'

OR

$INST_TOP/ora/10.1.2/bin/appsrwrun.sh userid=apps/apps mode=character report=$FND_TOP/reports/US/FNDSCURS.rdf batch=yes destype=file desname=./ddr2.out desformat=$FND_TOP/reports/HPL pagesize=132x66 traceopts=trace_all tracefile=ddr2.trc tracemode=trace_replace

Get error:
Environment:Environment java.lang.UnsatisfiedLinkError: Can't find library rwu
(librwu.a or .so) in sun.boot.library.path or java.library.path
sun.boot.library.path=/devp/oracle/apps/tech_st/10.1.2/jdk/jre/bin
java.library.path=/devp/oracle/apps/tech_st/10.1.2/jdk/jre/bin:/devp/oracle/apps
/tech_st/10.1.2/jdk/jre/bin/classic:/devp/oracle/apps/tech_st/10.1.2/lib32:/devp
/oracle/apps/tech_st/10.1.2/lib:/usr/dt/lib:/devp/oracle/apps/tech_st/10.1.2/jdk
/jre/bin:/devp/oracle/apps/tech_st/10.1.2/jdk/jre/bin/classic:/devp/oracle/apps/
apps_st/appl/cz/12.0.0/bin:/devp/oracle/apps/tech_st/10.1.2/lib32:/devp/oracle/a
pps/tech_st/10.1.2/lib:/usr/dt/lib:/devp/oracle/apps/tech_st/10.1.2/jdk/jre/bin:
/devp/oracle/apps/tech_st/10.1.2/jdk/jre/bin/classic:/devp/oracle/apps/apps_st/a
ppl/sht/12.0.0/lib:/usr/lib
Environment:getEnvironmentStrings java.lang.UnsatisfiedLinkError: Can't find lib
rary rwu (librwu.a or .so) in sun.boot.library.path or java.library.path
sun.boot.library.path=/devp/oracle/apps/tech_st/10.1.2/jdk/jre/bin
java.library.path=/devp/oracle/apps/tech_st/10.1.2/jdk/jre/bin:/devp/oracle/apps
/tech_st/10.1.2/jdk/jre/bin/classic:/devp/oracle/apps/tech_st/10.1.2/lib32:/devp
/oracle/apps/tech_st/10.1.2/lib:/usr/dt/lib:/devp/oracle/apps/tech_st/10.1.2/jdk
/jre/bin:/devp/oracle/apps/tech_st/10.1.2/jdk/jre/bin/classic:/devp/oracle/apps/
apps_st/appl/cz/12.0.0/bin:/devp/oracle/apps/tech_st/10.1.2/lib32:/devp/oracle/a
pps/tech_st/10.1.2/lib:/usr/dt/lib:/devp/oracle/apps/tech_st/10.1.2/jdk/jre/bin:
/devp/oracle/apps/tech_st/10.1.2/jdk/jre/bin/classic:/devp/oracle/apps/apps_st/a
ppl/sht/12.0.0/lib:/usr/lib
REP-50125: Caught exception: java.lang.reflect.InvocationTargetException

Error in ddr2.trc:

REP-0069: Internal error
REP-57054: In-process job terminated:Terminated with error:
REP-3000: Internal error starting Oracle Toolkit

Solution:

1. Check the E-Business (EBS) server environment DISPLAY as root user with commands: xhost + , xclock , echo $DISPLAY.
2. Change the DISPLAY setting in EBS context file to be value of step 1.
3. Retest the issue.

Wednesday 11 April 2018

Profile Set for enable/disable Report Output generation

Problem: Which profile option needs to be set to enable/disable report output generation?

Solution: Profile Set For Output Showing
Profile Name: Applications Web Agent
The internal name for this profile option is APPS_WEB_AGENT. It provides the base URL for the Apps schema's WebServer DAD.
You set this profile option during the install process.
This profile option must be set to the URL which identifies the mod_plsql PL/SQL Gateway Database Access Descriptor base URL for your Applications instance.
Oracle Applications use the value of this profile option to construct URLs for ’WWW’ type functions, Attachments, Export, and other features.
Use the following syntax to enter your URL:
https://<hostname>:<port>/pls/<dad_name>
Example:http://erptestappldb.in:8007/pls/TEST

Address data entered in HRMS not visible in the CRM Resource Manager

Problem:
When an employee is created in HRMS, the employee's address is entered via the Address button.
When the employee is imported as a CRM resource in CRM Resource Manager, the address of the corresponding resource cannot be seen in the Miscellaneous tab.

Solution:

To resolve the issue do the following

Step 1:
Navigation: Super HRMS Manager > People > Enter and Maintain > (B) Address
Ensure that the "Primary" check box is enabled in the HRMS address window.

Step 2:
Navigation: System Administrator > Profile > System
Set the following Profile Options
1) HZ: Protect HR Person Information = No
2) JTFRS: Hide Sensitive HR Data = No
The above profile option can be set either at Site level or at
Responsibility level

Step 3:
Navigation: System Administrator > Application > Function. Query the function "JTFRSDEF", go to the Form tab, enter the parameter below, and save.
This passes the parameter "P_HIDE_SENSITIVE_HR_DATA=N" to the Define Resources form.

Now go to the Miscellaneous tab in CRM Resource Manager and view the address data.

Wednesday 7 February 2018

R12 : All Concurrent Request are Running Normal Forever not Completing


I faced an internal issue where all standard concurrent requests in EBS R12.1.3 were not completing and stayed in the status "Running Normal" forever. This issue was reported on a recently cloned TEST environment.
Environment details:
EBS : 12.1.3
DB   : 11.1.0.7
OS   : IBM-AIX -7.1 (64-bit)
If we check the log file for any standard concurrent request, there is no error reported, and the concurrent manager log file for Active Users likewise shows nothing unusual.

The concurrent request just stays stuck with no further progress.
As a workaround we tried restarting the database and the concurrent managers, restarting all application services, and relinking the libraries, but none of them worked.
Cause:
To troubleshoot further, we tried executing the standard concurrent request "Active Users" from the command line.

$ $APPLORC userid=apps/ap**** mode=character report=$FND_TOP/reports/US/FNDSCURS.rdf batch=yes destype=file desname=FNDSCURS.txt desformat=$FND_TOP/reports/HPL.prt 

REP-3000: Internal error starting Oracle Toolkit.
REP-3000: Internal error starting Oracle Toolkit.

Report Builder: Release 10.1.2.3.0 - Production on Wed Feb 7 12:22:28 2018

Copyright (c) 1982, 2005, Oracle. All rights reserved.

REP-0069: Internal error
REP-57054: In-process job terminated:Terminated with error:
REP-3000: Internal error starting Oracle Toolkit.

Note:
==========
Also check the user permissions and the dba/OS group for the $APPLCSF/$APPLOUT and $APPLCSF/$APPLLOG directories.

Solution:

The problem is with VNCSERVER files.

Usually REP-3000 errors occur when the DISPLAY is not set correctly.

Please verify whether the correct value is set for DISPLAY (e.g. erptestappldb:0.0).
++++++++++++++++++++++++++++

Perform the below action plan

1. Log in via PuTTY as the applmgr user to the AIX machine running the EBS application. VNC server software must be installed on this machine.

2. Start a new VNC server process as the applmgr user on a new display, e.g. vncserver :3 (or ebis:0.0).

3. Start the vncviewer (from your client desktop) and log in as the applmgr user on the new display.

4. Check echo $DISPLAY to confirm the display value.

5. Type xclock and check whether the clock is displayed.

6. Once all the above actions are successful, open the applications context file on all nodes and change the s_display variable to the new value displayed in vncviewer (see the sketch after this list).

7. Save the changes

8. Shutdown the application services and run autoconfig on all nodes

9. Edit the .profile file to change DISPLAY to the new value and save it.

10. Restart the application services, retest by running the report, and confirm whether the REP-3000 error still occurs.
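For step 6, the old and new values can be checked quickly from the applmgr environment (s_display is the standard context variable; CONTEXT_FILE is set by the APPS environment file):

$ grep s_display $CONTEXT_FILE
$ echo $DISPLAY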

Friday 12 January 2018

How to Recover a Dropped/Truncated Table from a Database Backup

If a table has been mistakenly dropped from the database, and the database is terabytes in size, the dropped table can be recovered using the steps below.

Step 1.
We need a valid backup of the dropped/truncated table on the source.

Step 2.
Transfer the backup to another Test/Dev/UAT server.

Step 3. Create a fresh instance for recovery purposes and add an entry to the file below.
ORACLE_USER$ vi /etc/oratab   (add a dummy entry to the oratab file)

Step 4.
ORACLE_USER$ . oraenv   (then provide the dummy SID name from the entry just made)

Step 5.
ORACLE_USER$rman target /

Step 6.
RMAN> startup nomount force;   (command run in RMAN)

Step 7.
RMAN>restore spfile from 'Backup Path'.

Step 8.
SQL> create pfile from spfile;

Step 9.
ORACLE_USER$ vi $ORACLE_HOME/dbs/init<SID>.ora   (change parameter values such as the control file, datafile and redo log locations, etc.)

Step 10.
SQL> create spfile from pfile;

Step 11.
ORACLE_USER$ kill -9 <pmon pid>   (stop the instance started from the restored spfile)

Step 12.
ORACLE_USER$. oraenv
ORACLE_USER$rman target /

Step 13.
RMAN>startup nomount.

Step 14.
RMAN> restore controlfile from 'Backup Path';

Step 15.
RMAN>alter database mount.

RMAN>run
{
set newname for datafile 1 to '/u01/app/oracle/oradata/prod/system01.dbf';
set newname for datafile 2 to '/u01/app/oracle/oradata/prod/sysaux01.dbf';
set newname for datafile 3 to '/u01/app/oracle/oradata/prod/undotbs01.dbf';
set newname for datafile 4 to '/u01/app/oracle/oradata/prod/users01.dbf';
restore tablespace system, sysaux, undotbs1, users;
switch datafile all;
sql "alter database datafile 1,2,3,4 online";
recover database skip forever tablespace TEMP,APEX_1680608425593888,APEX,example;
}

-------- If the directory structure is the same there is no need to rename the redo logs; otherwise rename them: --------
sql "alter database rename file ''/u01/app/oracle/oradata/prod/REDO01.LOG'' to ''/u01/app/oracle/oradata/REDO01.LOG''";
sql "alter database rename file ''/u01/app/oracle/oradata/prod/REDO02.LOG'' to ''/u01/app/oracle/oradata/REDO02.LOG''";

Step 16.
SQL> alter database open resetlogs.

Step 17.
Use expdp/impdp to pull the recovered table out of this auxiliary instance and load it back into the source database; a sketch follows.
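A minimal sketch of that final step (the schema, table, directory and dump file names are illustrative): export the recovered table from this auxiliary instance, copy the dump file to the source server, and import it back.

$ expdp system/<password> tables=SCOTT.EMP directory=DATA_PUMP_DIR dumpfile=emp_recover.dmp logfile=emp_recover_exp.log

$ impdp system/<password> tables=SCOTT.EMP directory=DATA_PUMP_DIR dumpfile=emp_recover.dmp logfile=emp_recover_imp.log table_exists_action=append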