Wednesday, December 19, 2007

Need to kill the J000 process to shut down the database after an upgrade from 8.1.7.4 to 9.2.0.8.

This is a common problem after the upgrade. One important point to remember is to disable all the schedules for DBMS jobs before kicking off the upgrade and to enable them again after the upgrade is over. Alternatively, the job_queue_processes parameter can be set to '0'.

You see the following symptoms in the alert log.

-- Clip --

Active call for process 21244 user 'oracle' program 'oracle@hostname (J000)'
SHUTDOWN: waiting for active calls to complete.


-- End Clip --
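The alert log line itself names the PID to kill. A small shell sketch (the file name is illustrative and the log format is assumed from the clip above) pulls out the J000 PIDs blocking shutdown:

```shell
# Hypothetical sketch: extract the PIDs of J000 processes that are blocking
# shutdown from the alert log. File name and format are assumptions based
# on the clip above.
alert_log="alert_test.log"

# Sample alert log content for illustration (normally this file already exists)
cat > "$alert_log" <<'EOF'
Active call for process 21244 user 'oracle' program 'oracle@hostname (J000)'
SHUTDOWN: waiting for active calls to complete.
EOF

# The PID is the 5th field of every "Active call for process ... (J000)" line
pids=$(grep "Active call for process" "$alert_log" | grep "(J000)" | awk '{print $5}')
echo "$pids"
# The DBA would then verify each PID belongs to a J000 job before killing it,
# e.g.: kill -9 $pids
```

Verify what each PID is before killing it; killing the wrong oracle shadow process can crash the instance.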

Trace File of J000 process shows

-- Clip --

Unix process pid: 13302, image: oracle@hostname (J000)

*** SESSION ID:(95.9333) 2007-12-14 22:50:52.282
OPIRIP: Uncaught error 1089. Error stack:
ORA-01089: immediate shutdown in progress - no operations are permitted
ORA-00448: normal completion of background process


-- End Clip --


Reference - Metalink Note:342805.1

Friday, December 14, 2007

AutoPatch Timing Report - useful information that helps you plan your production downtime.

I was all set to write a shell script that would parse the patch log file and list the top time-consuming SQL scripts, so I would know how long each SQL took during the upgrade. Then, after watching the Maintenance Pack (3480000) session for 30+ hours, the following lines of the AutoPatch session caught my attention.

-- Clip --

sqlplus -s APPS/***** @/xxxx/xxxx/xxxxappl/ad/11.5.0/admin/sql/adtimrpt.sql 5578 adt05578

A job timing report has been generated for the current session.
You should check the file
/xxxx/xxxx/xxxxappl/admin/XXXXXX/out/adt05578.lst

for details.

-- End Clip --

Out of curiosity I opened the file "adt05578.lst" and saw exactly the information I was looking for.

-- Clip --

Job Timing Report for AutoPatch session 5578 14-DEC-2007 19:37

Summary page 1

Jobs that ran successfully on the first try : 96048
Jobs Failed, deferred, then run successfully : 35
Jobs Failed, restarted, then run successfully : 3
Jobs Failed and Skipped : 0

Total number of jobs : 96086


^L
Job Timing Report for AutoPatch session 5578 14-DEC-2007 19:37

Top 100 Time Consuming Jobs (out of 96086 total jobs) page 1

Elapsed
Task Time Wrk Start Time
Num Prod Job (HH:MM:SS Phase Id (Mon DD YYYY HH:MI:SS)
----- ------ ------------------ ---------- ------------ ---- -----------------------
0 inv inviusi1.sql 3:25:18 dat+10 16 Dec 13 2007 21:28:57
0 inv inviusi1.sql 3:25:15 dat+10 10 Dec 13 2007 21:28:57
0 inv inviusi1.sql 3:25:15 dat+10 9 Dec 13 2007 21:28:59
0 inv inviusi1.sql 3:25:13 dat+10 18 Dec 13 2007 21:28:57
0 inv inviusi1.sql 3:25:08 dat+10 3 Dec 13 2007 21:28:59
0 inv inviusi1.sql 3:25:08 dat+10 13 Dec 13 2007 21:29:06
0 inv inviusi1.sql 3:25:07 dat+10 8 Dec 13 2007 21:28:57
0 inv inviusi1.sql 3:24:58 dat+10 19 Dec 13 2007 21:29:17
0 inv inviusi1.sql 3:24:57 dat+10 27 Dec 13 2007 21:29:01
0 inv inviusi1.sql 3:24:57 dat+10 6 Dec 13 2007 21:29:18
0 inv inviusi1.sql 3:24:56 dat+10 12 Dec 13 2007 21:28:58

-- End Clip --


The summary section of this report is important and can be a source of truth for auditing purposes.
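For what it's worth, the parser I had planned to write is only a few lines of awk against the adt*.lst layout shown above (sample lines inlined for illustration):

```shell
# Sketch of the log parser I had planned to write: rank jobs by elapsed
# time, assuming the adt*.lst column layout shown in the clip above.
cat > timing_sample.txt <<'EOF'
    0 inv    inviusi1.sql          3:25:18  dat+10       16   Dec 13 2007 21:28:57
    0 inv    inviusi1.sql          3:25:15  dat+10       10   Dec 13 2007 21:28:57
EOF

awk '{
  n = split($4, t, ":")                 # elapsed time is column 4 (H:MM:SS)
  if (n == 3) {
    secs = t[1]*3600 + t[2]*60 + t[3]
    printf "%s %s %d\n", $2, $3, secs   # product, job, elapsed seconds
  }
}' timing_sample.txt | sort -k3 -rn
```

adtimrpt.sql makes this unnecessary, but the same idea works on any AutoPatch log where the report is not available.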

My Experience with Oracle Service Contracts (OKS) Rule Migration Metalink Note:265048.1

Does this Note:265048.1 sound familiar ? At least I can't forget this Note#; I can recite it even if someone wakes me up in the middle of the night. That is because I spent so much time and so many iterations doing the Service Contracts Rule Migration before the 11.5.10.2 upgrade, to cut down the time the Maintenance Pack (Patch# 3480000) takes.

Coming to the technical aspect of this Note#, it looks like there is an architectural change in the way Service Contracts are maintained between 11.5.8 and 11.5.10.2. So, all customers with huge volumes of data/Contracts are supposed to follow Note#265048.1 and migrate the existing data to the new tables. From my own experience I have arrived at the following "Best Practices".

#1. First and foremost, read and understand Note#265048.1 and know the importance of this module. Sit with the CRM users and understand how they use this module in their daily work. Don't be in a rush to apply the patches mentioned in the note.

#2. Before you get your hands on the Service Contracts Rule Migration, get hold of a Service Contracts functional expert. The OKS functional expert needs to submit a few concurrent requests and validate the Service Contracts data (especially the Coverage Times and Reaction Times of a Contract) during the process.

#3. Define roles & responsibilities for the "Service Contracts Rule Migration" process documented in Note#265048.1 and agree on them in black & white in a meeting with all the stakeholders. I am deliberately over-emphasizing this. An email recording the agreement will do. ;)

#4. If the Applications and Database you are upgrading are 11.5.8 and 8.1.7.4, then make sure to add sufficient space to the OKSD and OKSX tablespaces before you kick off the Section 3 patch mentioned in Note:265048.1. This patch is the one that does the bulk data processing; in other words, it inserts the data into the new tables. CAUTION !!! Do not add the datafiles with AUTOEXTEND ON for the OKSD and OKSX tablespaces. You might hit bug:2229895 while the patch sqls do an insert, and you will see the following ORA-00600 error in the alert log.

"ORA-00600: internal error code, arguments: [ktfbtgex-7], [952329], [5], [952328], [], [], [], []"

Refer to my post "Importance of keeping an eye on alert.log while applying Oracle Applications Patches".

#5. Never apply these patches in a bare telnet or ssh session. Use VNC to apply the patches mentioned in Sections 2 and 3, as they take several hours (> 10 hours).

#6. Document the timings and any errors you hit.

#7. Last but not least, never skip any ADPATCH worker. If you do, you will pay for it.

This is my experience with OKS in a nutshell. Even though this post looks theoretical, it is very practical. :)

nJoy !!


Wednesday, December 12, 2007

Revisiting My Previous Post "DBUA FAILS WITH unable to extend rollback segment ODMA_RBS01"

I thought I'd add some more details to the post "DBUA FAILS WITH unable to extend rollback segment ODMA_RBS01".

Just changing the entry InNewTablespace name="ODMA_RBS" size="50" unit="MB" in $ORACLE_HOME/assistants/dbma/mep.cfg is not sufficient. There are a couple more tags that you need to play with in mep.cfg. Refer to the CreateRollbackSegments "ODMA_RBS01" revert="true" section in mep.cfg and pay attention to the following tags.

#1. Maxsize value="90" unit="MB"
#2. MaxExtents value="30"

Note: All the values mentioned above are default values.

I ran into an issue when I changed Maxsize to "20000" (20 GB), as 20 GB of ODMA_RBS was my requirement. DBUA failed with the following message.

--Clip--
CREATE TABLESPACE ODMA_RBS DATAFILE '$ORACLE_HOME/oradata/SID/odma_rbs.dbf' SIZE 2147483648 REUSE AUTOEXTEND ON NEXT 10485760 MAXSIZE 20971520000 default STORAGE ( INITIAL 3145728 NEXT 3145728 MINEXTENTS 1 MAXEXTENTS 100 )

*
ERROR at line 1:
ORA-02494: invalid or missing maximum file size in MAXSIZE clause

--End Clip--

Solution: It looks like when DBUA interprets mep.cfg, it converts the values you changed into bytes, and the CREATE TABLESPACE command it then runs does not recognize a byte-equivalent value of more than 4 GB (Reference Note:207964.1). I haven't had a chance to explore this further. When I do, you will see a new post :-) on my blog. Till then, restrict Maxsize in mep.cfg to 4 GB or a lesser value.
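The conversion is easy to check from the shell. A quick sketch (the 4 GB ceiling is my reading of the ORA-02494 error together with Note:207964.1, not something I have traced inside DBUA):

```shell
# Reproduce DBUA's MB-to-bytes conversion of the mep.cfg Maxsize value and
# compare it against the apparent 4 GB ceiling on the MAXSIZE clause.
mb=20000
bytes=$((mb * 1024 * 1024))
limit=$((4 * 1024 * 1024 * 1024))   # 4 GB in bytes

echo "MAXSIZE in bytes: $bytes"
if [ "$bytes" -gt "$limit" ]; then
  echo "Too big: DBUA would emit MAXSIZE $bytes and hit ORA-02494"
fi
```

20000 MB comes out to 20971520000 bytes, exactly the MAXSIZE value in the failing statement above.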

Enjoy !!

Tuesday, December 4, 2007

adpcpcmp.pls - Takes an hour while applying any patch after upgrading to 11.5.10.2

Any simple problem left unresolved after the upgrade will cause long-term maintenance issues in production.

It is very important to watch the alert log while applying an Oracle Applications patch, as it gives a wealth of information. As a matter of practice I watch the alert log while applying a patch, and I noticed that the invalids compilation stage of adpatch, performed by "adpcpcmp.pls", was taking 60 minutes. My alert log watch caught the following error.

-- Clip --

Wed June 5 11:55:00 2007
Waited too long for library cache load lock. More info in file /xxxx/xxxx/udump/orcl_ora_7064.trc.

-- End Clip --

The trace file shows the following object

-- Clip --

LIBRARY OBJECT HANDLE: handle=3b3a2f458
name=TESTDB.MY_TEST_TABLE@db_link_name
hash=7e342709

-- End Clip --

That was all I needed to cut the compilation time from 60 minutes down to 10 minutes: the host referenced by the SID used in the above database link was not reachable.
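To spot the culprit quickly in such a trace, grep out the library object names; objects referenced over a database link end in "@<link>" and are the first suspects. A sketch (the trace file name is made up, the format assumed from the clip above):

```shell
# Pull object names out of a library-cache trace file. Hypothetical file
# name; line format assumed from the clip above.
cat > orcl_ora_test.trc <<'EOF'
LIBRARY OBJECT HANDLE: handle=3b3a2f458
name=TESTDB.MY_TEST_TABLE@db_link_name
hash=7e342709
EOF

# Objects referenced over a database link contain "@<link>"; those are the
# first suspects when the wait is on a library cache load lock.
grep '^name=' orcl_ora_test.trc | sed 's/^name=//' | grep '@'
```

Once you have the link name, a quick tnsping of its target tells you whether the remote host is reachable.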

So, watching the alert log while applying Oracle Apps patches pays off !!

Enjoy !!

Thursday, November 29, 2007

DBUA FAILS WITH unable to extend rollback segment ODMA_RBS01

--Clip--

BEGIN
2 dbms_prvtaqis.upgrade_rulesub_msgs;
3 END;
4 /
BEGIN
*
ERROR at line 1:
ORA-01562: failed to extend rollback segment number 12
ORA-01650: unable to extend rollback segment ODMA_RBS01 by 384 in tablespace ODMA_RBS
ORA-06512: at "SYS.DBMS_PRVTAQIS", line 3507
ORA-06512: at "SYS.DBMS_PRVTAQIS", line 3515
ORA-06512: at line 2

--End Clip--

If the database used by Oracle Applications has many messages in the AQ tables, DBUA fails with the above error message. For some reason, if you add additional space to ODMA_RBS from the backend while DBUA is doing its job, the additional space is not recognized.

Workaround :

So, the workaround is to edit the file $ORACLE_HOME/assistants/dbma/mep.cfg and change the following XML tag before you kick off the DBUA.

InNewTablespace name="ODMA_RBS" size="50" unit="MB"

Change the size to the value you desire and then launch DBUA. It should solve your problem.
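If you script your DBUA preparation, the edit itself can be done with sed. A sketch against a made-up one-line fragment of mep.cfg (the real file has much more around this tag; always keep a backup):

```shell
# Bump the ODMA_RBS size attribute in a mep.cfg-style fragment with sed.
# The XML fragment below is illustrative, not the real file.
cat > mep_sample.cfg <<'EOF'
<InNewTablespace name="ODMA_RBS" size="50" unit="MB"/>
EOF

cp mep_sample.cfg mep_sample.cfg.bak     # always keep a backup copy
sed 's/\(name="ODMA_RBS" size="\)[0-9]*\("\)/\12000\2/' mep_sample.cfg.bak > mep_sample.cfg
cat mep_sample.cfg
```

Here the size is bumped to 2000 MB; pick whatever value your AQ message volume requires.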

Enjoy !!

Wednesday, November 28, 2007

After 9.2.0.8 patchset adgrants.sql fails "O/S Message: No such file or directory"

SYMPTOM:

sqlplus "/ as sysdba" @adgrants.sql

SQL*Plus: Release 9.2.0.8.0 - Production on Sun Nov 18 16:14:16 2007

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

O/S Message: No such file or directory

SQL*Plus exits with the O/S message "No such file or directory" or "Permission denied".


Solution:

Apply the RDBMS patch 5495695. Review the README.txt for instructions.

Monday, October 15, 2007

WF_DEFERRED Queue is having few million messages in "Ready" State

I would like to publish this blog post as a set of questions and answers that will help you resolve the situation where the "Ready" state messages in the WF_DEFERRED queue run into a few millions and the Workflow Deferred Agent Listener has been down for years, which caused the pile-up of records in the WF_DEFERRED queue.


Q1. How do I find out in the first place that the WF_DEFERRED queue has millions of records in "Ready" state ?

The simplest way to find out is by executing the script $FND_TOP/sql/wfver.sql. In the spool file that the script creates, look for

--Clip--

Notification Subsystem Queue Info
This requires package WF_QUEUE to be valid
______________________________________________________________________________
WF_DEFERRED messages ready:8406878, waiting:16, expired:0, undeliverable:0, processed:2586

--End Clip--

The "ready" figure indicates the number of messages in "Ready" state. If you see big numbers in this section of the wfver.sql output, it is a matter of concern. Get ready to do an operation ;)
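If you want to watch the count without eyeballing the spool file each time, the "ready" figure can be pulled out with a little shell (the one-line format is assumed from the clip above; the threshold is an arbitrary example):

```shell
# Extract the "ready" count for WF_DEFERRED from a wfver.sql spool file.
# Sample content inlined; the one-line format is assumed from the clip above.
cat > wfver_sample.lst <<'EOF'
WF_DEFERRED messages ready:8406878, waiting:16, expired:0, undeliverable:0, processed:2586
EOF

ready=$(grep 'WF_DEFERRED messages' wfver_sample.lst | sed 's/.*ready:\([0-9]*\).*/\1/')
echo "WF_DEFERRED ready messages: $ready"
if [ "$ready" -gt 1000000 ]; then       # arbitrary alert threshold
  echo "Time for an operation ;)"
fi
```

A cron job around this is an easy early-warning system for a dead listener.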

Q2. What happens if the number of messages in this queue is very high ?

One of the obvious things that happens when this queue is so big is a delay in processing the messages. Just imagine the Order Management related messages not being processed on time, and a delay of a few hours in notifying the users.

Q3. What is the reason for this queue to grow to such huge numbers ?

"Workflow Deferred Agent Listener" is the process that works off the messages in the WF_DEFERRED queue. If the listener is down, you should try to bring it up. If the number of messages in the WF_DEFERRED queue is very large, the listener fails to come up or dies within a few minutes.

Q4. How do I bring up the "Workflow Deferred Agent Listener" in this situation ?

Believe me. It is not easy.

Step 1: Execute the following script to find out event wise "Ready" state messages.

--Script Begins here--

set linesize 100
col corrid format a50
col state format a15
col count format 99999999
select NVL(substr(wfe.corrid,1,50),'NULL - No Value') corrid,
       decode(wfe.state, 0, '0 = Ready',
                         1, '1 = Delayed',
                         2, '2 = Retained',
                         3, '3 = Exception',
                         to_char(substr(wfe.state,1,12))) state,
       count(*) "COUNT"
from   applsys.wf_deferred wfe
group  by wfe.corrid, wfe.state;

--Script Ends here--

Note: This script might take a long time depending upon the number of messages in the WF_DEFERRED queue.

You should see output something like this.

CORRID STATE COUNT
------------------------------------------------------------ -------------------- ---------
APPS:oracle.apps.ar.adjustments.Adjustment.create 0 = Ready 72488
APPS:oracle.apps.ar.applications.CashApp.apply 0 = Ready 557692
APPS:oracle.apps.ar.applications.CashApp.unapply 0 = Ready 145695
APPS:oracle.apps.ar.applications.CreditMemoApp.app 0 = Ready 58107
APPS:oracle.apps.ar.applications.CreditMemoApp.una 0 = Ready 2512
APPS:oracle.apps.ar.batch.AutoInvoice.run 0 = Ready 32950

Step 2: This step processes the events manually. The job of the "Workflow Deferred Agent Listener" is carried out by hand by executing the following script.

--Script Begins here--

spool oracle.apps.ar.adjustments.Adjustment.create.log
set serveroutput on size 100000;
begin wf_log_pkg.wf_debug_flag := TRUE;
wf_event.listen(p_agent_name =>'WF_DEFERRED',p_correlation=>'oracle.apps.ar.adjustments.Adjustment.create');
end;
/
commit
/

--Script Ends here--

Execute the above script for each of the events that are in the "Ready" state. Once the count of "Ready" state messages comes down to a reasonable number, try starting the "Workflow Deferred Agent Listener". This process is really time-consuming, so plan your vacation for after the issue is resolved. You need a break !! :)
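Running that anonymous block once per event gets tedious when dozens of correlation ids are in "Ready" state. A shell sketch (the corrid list and file names are illustrative) can stamp out one listen script per event:

```shell
# Generate one wf_event.listen script per "Ready" correlation id.
# The corrid list here is illustrative; in practice feed in the output of
# the count-by-corrid query, stripped of the "APPS:" prefix.
corrids="oracle.apps.ar.adjustments.Adjustment.create
oracle.apps.ar.applications.CashApp.apply"

for c in $corrids; do
  cat > "listen_${c}.sql" <<EOF
spool ${c}.log
set serveroutput on size 100000;
begin
  wf_log_pkg.wf_debug_flag := TRUE;
  wf_event.listen(p_agent_name => 'WF_DEFERRED', p_correlation => '${c}');
end;
/
commit
/
spool off
EOF
done

ls listen_*.sql
```

Each generated file can then be run in sqlplus as APPS, one event at a time.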

Q5. What do you, as an Apps DBA, need to do when users complain that OM notifications are stuck ?

Use the following query to verify that what the users are saying is correct ;)

SQL> select message_type,count(1) from wf_notifications where status='OPEN' and mail_status='MAIL' group by message_type;

MESSAGE_ COUNT(1)
-------- ----------
FNDCMMSG 6
IBEALERT 32
JTFTASK 2441
OEOH 112
OEOL 462
OKCAUKAP 116
POAPPRV 121
REQAPPRV 40

MESSAGE_ COUNT(1)
-------- ----------
WFERROR 4524

Q6. What can you do to speed up the processing and sending of notifications ?

Execute the following script manually to speed up the delivery of notifications.

--Script Begins here--

spool oracle.apps.wf.notification.log
set serveroutput on size 100000;
begin wf_log_pkg.wf_debug_flag := TRUE;
wf_event.listen(p_agent_name => 'WF_DEFERRED',p_correlation=>'oracle.apps.wf.notification.%');
end;
/

--Script Ends here--

Enjoy !!



Thursday, October 4, 2007

Workflow Background Process errors with ORA-00600 internal error code, arguments: [4414]

Workflow Background Process concurrent request fails with the following error message when the request is submitted with Order Management Item type.
Symptom:
ORA-00600: internal error code, arguments: [4414], [12226], [1], [12226], [1], [], [],
ORA-01403: no data found

The Workflow Background Process concurrent request is a Workflow Background Engine. It needs to be scheduled at regular intervals, as it processes deferred activities and timed-out activities.

Generally the 'Workflow Background Process' fails with the above error after upgrading to Oracle Applications 11.5.10.2, when the database has been upgraded from 8.1.7.4 to 9i.

What does "ORA-600 [4414] " followed by an "ORA-01403: no data found" mean ?

If a queue has messages enqueued for rule-based subscribers prior to the upgrade, then dequeuing these messages on the upgraded database results in an "ORA-600 [4414]" followed by an "ORA-01403: no data found" error.

Reference from Metalink Note:200983.1

Solution:


sqlplus "/ as sysdba"
SQL> exec sys.dbms_prvtaqis.upgrade_rulesub_msgs;


This should solve the issue.

The package "dbms_prvtaqis.upgrade_rulesub_msgs" is not available in 8i but is available in 9i. It upgrades the "Rule Based Subscriber Messages" to the 9i format.

Enjoy !!

Wednesday, October 3, 2007

Package 'CSI_ITEM_INSTANCE_PVT_W' Invalid after 11.5.10.2 Upgrade

Symptom:

If the existing .pls file in your environment has the following version & you have just upgraded to Oracle Apps 11.5.10.2, then read further.

$ strings -a ./patch/115/sql/csiviiws.pls | grep -i Header
REM $Header: csiviiws.pls 115.4.1159.2 2006/03/10 10:04:55 abhgupta ship $
/* $Header: csiviiws.pls 115.4.1159.2 2006/03/10 10:04:55 abhgupta ship $ */
procedure construct_inst_header_rec(p_inst_id NUMBER

SQL> alter package apps.CSI_ITEM_INSTANCE_PVT_W compile body ;
Warning: Package Body altered with compilation errors.

SQL> show error;
Errors for PACKAGE BODY APPS.CSI_ITEM_INSTANCE_PVT_W:

LINE/COL ERROR
-------- -----------------------------------------------------------------
997/5 PLS-00306: wrong number or types of arguments in call to
'GET_PARENT_SORT_ORDER'
997/5 PL/SQL: Statement ignored
2855/5 PL/SQL: Statement ignored
2855/22 PLS-00302: component 'VERSION_LABEL' must be declared
2856/5 PL/SQL: Statement ignored
2856/22 PLS-00302: component 'VERSION_LABEL_MEANING' must be declared
2998/5 PL/SQL: Statement ignored
2998/33 PLS-00302: component 'VERSION_LABEL' must be declared
2999/5 PL/SQL: Statement ignored
LINE/COL ERROR
-------- -----------------------------------------------------------------
2999/33 PLS-00302: component 'VERSION_LABEL_MEANING' must be declared
3279/5 PL/SQL: Statement ignored
3279/22 PLS-00302: component 'VERSION_LABEL' must be declared
3280/5 PL/SQL: Statement ignored
3280/22 PLS-00302: component 'VERSION_LABEL_MEANING' must be declared
3420/5 PL/SQL: Statement ignored
3420/33 PLS-00302: component 'VERSION_LABEL' must be declared
3421/5 PL/SQL: Statement ignored
3421/33 PLS-00302: component 'VERSION_LABEL_MEANING' must be declared
3568/5 PLS-00306: wrong number or types of arguments in call to
'ROSETTA_TABLE_COPY_IN_P22'
LINE/COL ERROR
-------- -----------------------------------------------------------------
3568/5 PL/SQL: Statement ignored

Solution:

Apply patch:6264601. Patch:6264601 might make CSE_PROJ_ITEM_IN_SRV_PKG / CSE_PROJ_TRANSFER_PKG invalid as well. If these packages do go invalid, there is no need to worry: follow Metalink Note:403993.1 to fix CSE_PROJ_ITEM_IN_SRV_PKG / CSE_PROJ_TRANSFER_PKG.

Saturday, August 18, 2007

Mike Shaw's 10 Tips for protecting your 'APPS' password

10 Tips for protecting your 'APPS' password - is a very nice post by Mike Shaw of Oracle Corp. Of the 10 tips in the post, #7 & #10 are my favourites.

#7. Ensure no processes are running with the APPS username/password on the command line - This is very important, as Apps DBAs are always tempted to use sqlplus apps/apps-password at the unix command prompt. Any other user who is not supposed to know the apps password but has unix access can easily find out the 'APPS' password by simply firing the command 'ps -ef | grep -i apps'. So, Apps DBAs - watch out !! :)

#10. Allow only specific IP addresses to access the RDBMS via SQL*Net - In large enterprises the IP traffic is controlled using a firewall. It is always a good idea to allow traffic only from a combination of the middle tier IP addresses + the DB port #.

Over a period of time, the implemented protection has to be maintained. A typical Apps DBA's work life is so busy with patches, user calls, upcoming project deadlines etc., that by the time he realizes there are so many things to do, the clock ticks 07:00 PM and it is time to go home. So, taking time out to proactively check the protection is not very practical. This is where monitoring comes into the picture.

What needs to be monitored ?

It is always said that you need to be paranoid about monitoring something that needs to be protected. The following are some tips I can think of to keep the 'APPS' password from falling into the wrong hands - and even if it does fall, to catch it fast.

#1. The most obvious one is to ensure that $APACHE_TOP/Apache/modplsql/wdbsvr.app & $ORACLE_HOME/reports60/server/CGIcmd.dat have 700 permissions. Let a monitoring script check for the expected permissions at regular intervals and send e-mail/SMS alerts.

#2. There is always one place where the 'APPS' password gets recorded (even the best security guides fail to document this place): .sh_history or .bash_history, depending on the default shell of the applmgr unix user account. It is always a good practice to clear the history upon logout, as the way to stop Oracle Apps is to run adstpall.sh apps/apps-password.

bash shell (bash) - "~/.bash_logout" - this file is called by the bash shell upon logout. Place "rm $HOME/.bash_history" in it to clear the history upon logout.

korn shell (ksh) - In the korn shell I think there is no file that automatically gets called during the logout process. So, alias 'exit' as follows: alias exit='rm $HOME/.sh_history; exit'.

#3. Recently I learnt that sql sessions connecting as 'APPS' can be monitored as well. I think this is the best way to check the 'APPS' database sessions. So, have the following script in place to check 'APPS' database sessions for unauthorized access.

Script Courtesy : My fellow Apps DBAs.

select s.sid "SID",
       s.serial# "SERIAL#",
       s.username dbuser,
       s.osuser,
       s.machine "MACHINE",
       s.terminal "TERMINAL",
       to_char(s.logon_time,'DD-MON-YYYY HH24:MI:SS') logged_in,
       s.program,
       s.module
from   v$session s
where  s.username = 'APPS'
and    s.module in ('TOAD.exe','PL/SQLDeveloper','SQL*Plus')
and    lower(s.osuser) not like '%osusername'
and    lower(s.osuser) not in ('oracle user','applmgr user');
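Tip #1 above (checking file permissions at regular intervals) can be sketched in shell like this. The file name is a stand-in for wdbsvr.app / CGIcmd.dat, the mail/SMS alerting is left out, and GNU stat (Linux) is assumed:

```shell
# Sketch of a permissions monitor: flag files whose mode is not the expected
# one. "sample_wdbsvr.app" is a stand-in for the real sensitive files.
check_perm() {
  f="$1"; want="$2"
  got=$(stat -c '%a' "$f")              # GNU stat; on e.g. Solaris use perl or ls parsing
  if [ "$got" != "$want" ]; then
    echo "ALERT: $f has mode $got, expected $want"   # hook mail/SMS here
  else
    echo "OK: $f"
  fi
}

touch sample_wdbsvr.app
chmod 700 sample_wdbsvr.app
check_perm sample_wdbsvr.app 700        # should report OK

chmod 644 sample_wdbsvr.app
check_perm sample_wdbsvr.app 700        # should raise an ALERT
```

Run it from cron every few minutes against the real file paths.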

Wednesday, August 15, 2007

vi for Apps DBAs

I intend to start off my first blog post with "vi for DBAs". I used to receive queries (not sql queries :) ) from fellow DBAs on how to search and replace in "vi". A collection of (what I think are) complex search and replace commands of vi that I came across is summarized below. These commands are useful for DBAs in their day-to-day administration.

#1. Once upon a time a user sent me an excel sheet with 100 tables in it, requesting me to grant INSERT, UPDATE & DELETE privileges to two different users. Just imagine the complexity if you are not familiar with vi. I opened up "vi" (my favourite editor), and cut & pasted all the 100 tables into a file. My requirement now was to generate a grant script with the following sql query.

SQL> select 'grant insert on '||table_name||' to xyz' from dba_tables where table_name in ('TABLE1','TABLE2',.....)

Just to highlight: every table in the above sql needs to be quoted using ' ' and separated by a ",". To achieve this for all the tables already copied into the file, I used the following command.
:1,$ s/.*/'&',/g

Explanation:

".*" - matches the entire line. "&" - refers back to the text that was matched. So, if the line has TABLE1 it will be replaced by 'TABLE1', (notice the quotes and the comma).
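For the record, the same transform can be done non-interactively with sed, which is handy when the table list already sits in a file (the file name is illustrative):

```shell
# Wrap every table name in quotes and append a comma - the sed equivalent
# of the vi command :1,$ s/.*/'&',/g
cat > tables.txt <<'EOF'
TABLE1
TABLE2
EOF

sed "s/.*/'&',/" tables.txt
```

As in vi, "&" in the sed replacement recalls the matched text.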

#2. How to search and replace a text containing lots of "/"s.

Assume the search string is "/prod1/applmgr/prod1appl" and the replace string is "/test1/applmgr/test1appl". If you use the usual syntax :1,$ s//prod1/applmgr/prod1appl//test1/applmgr/test1appl/g, "vi" is obviously going to get confused and error out, because it cannot tell which "/" is part of the search string and which "/" is a delimiter that is part of the search-replace syntax.

So, use the syntax mentioned below which will make you smile.

:1,$ s#/prod1/applmgr/prod1appl#/test1/applmgr/test1appl#g

Here "#" is part of the search-replace syntax. In other words, you can use any character as the delimiter, as long as the character you choose is not part of the search or replace string. See how flexible "vi" is....

#3. How to do a copy and paste across files ?

Assume we have two files, a.txt and b.txt. Our goal is to copy text from a.txt and paste it into b.txt. In "vi" you can define buffers and even name them. Surprising, isn't it !! Yes, you can. Each buffer can be named with a number 1-9 or a letter a-z. So, follow the procedure below to copy and paste across files.

#1. The first step in this process is to define a buffer and put some content into it.
#2. vi a.txt and type "a2yy (remember, a buffer name can be a-z or 1-9; in this case I have chosen the buffer named "a"). 2yy indicates that 2 lines are to be copied/yanked.
#3. Now open b.txt using :e b.txt. Once b.txt is opened, go to the line where you would like the lines pasted and type "ap. This recalls the contents of buffer "a" and pastes them.

Tip: Must be wondering how to remember the syntax ? It is easy - all you need to do is use it a couple of times.

Enjoy !! & Happy vi'ing.