
Re: Large value_info.pag file

To: nv-l@lists.tivoli.com
Subject: Re: Large value_info.pag file
From: lclark@us.ibm.com
Date: Mon, 3 Apr 2000 09:44:05 -0400


Your problem restoring is probably because NetView stores these databases as
sparse files. So, while you may back up using tar, you should use the pax
command to restore; that way the restored files will not take more space
than the original data did. The syntax is:
pax -r -p e -f <tarfile>
Using this command you should be able to restore the large database
that contains your cut/paste work.
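For example, assuming the backup was a tar of the default /usr/OV/databases
directory to tape (the device name here is only illustrative), the full
sequence would look something like this:

    # Back up with tar. tar reads the holes in the sparse files as zeros,
    # so the archive itself can be far larger than the space used on disk.
    cd /usr/OV/databases
    tar -cvf /dev/rmt0 openview

    # Restore with pax instead: -r reads the archive, -p e preserves mode,
    # ownership and timestamps, and runs of zero blocks are written back
    # out as holes rather than allocated disk blocks.
    cd /usr/OV/databases
    pax -r -p e -f /dev/rmt0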
Then you should probably consider switching to the turbo form of the
database. I will attach a document that Support has issued comparing
the various types of databases and compression utilities.
(See attached file: nvbak.txt)
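By the way, the 'du' output you quoted already shows how little of that file
is real: on AIX, du reports 512-byte blocks, so 33680 blocks is only about
17 MB actually allocated, versus the ~1.8 GB that ls reports. You can compare
the two figures like this (the path assumes the default database location):

    cd /usr/OV/databases/openview/ovwdb/current
    ls -l value_info.pag   # apparent size in bytes (~1.8 GB)
    du value_info.pag      # 512-byte blocks actually allocated (33680, ~17 MB)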


Cordially,

Leslie A. Clark
IBM Global Services - Systems Mgmt & Networking
(248) 552-4968 Voicemail, Fax, Pager


---------------------- Forwarded by Leslie Clark/Southfield/IBM on
04/03/2000 09:35 AM ---------------------------

"Jewan, P. (Pritesh)" <PriteshJ@nedcor.com>@tkg.com on 04/03/2000 04:38:17
AM

Please respond to IBM NetView Discussion <nv-l@tkg.com>

Sent by:  owner-nv-l@tkg.com


To:   "'nv-l@tkg.com'" <nv-l@tkg.com>
cc:
Subject:  [NV-L] Large value_info.pag file

Hello people,

We recently reinstalled our NetView machine with Framework 3.6 and NetView
5.1.2 on AIX 4.2.1. After we rediscovered our network and customised the
maps (which is a massive task for our network), we noticed that the
value_info.pag file was just under 2 GB in size (1,828,765,996 bytes, to be
specific). We managed to back up the database to tape and then ran the
Compress Object Database option from the Tivoli desktop. Once the compress
had completed, certain daemons would not start and we had to restore from
tape. However, every time the tar got to value_info.pag it would complain
that there was not enough space in the file system to restore
value_info.pag, even though the file system we were restoring to had 3 GB
of space. So we finally had to trash the database and start from scratch.
Now that the process has completed again, value_info.pag has shot back up
to more or less 1.8 GB.

We have about 20,000 objects in the database, and 'du -a value_info.pag'
reports 33680. Is Compress Object Database the right option to run, and
what else should we have run with it? Why would this option have corrupted
the database? Also, should we try the dbmcompress utility, and if so, which
options should we run it with? Does anyone know why this file gets so
large?

Any help or suggestions would be greatly appreciated.

Regards
Pritesh Jewan
ESM - Technology & Operations Division
Nedcor Bank Limited (South Africa)

Tel : +27 - 011 - 320 5417
Cell: +27 - 82 570 5046
Fax : +27 - 011 - 8814743
e-mail : priteshj@nedcor.com


Attachment: nvbak.txt
Description: Text - character set unknown
