Discussion:
Solaris 10: Disk space issue
janaabdulaleem
2005-09-22 05:58:36 UTC
Permalink
Hi All,
My hard disk is 60GB
and I have used it completely (no other partitions) to install Solaris.

Disk slicing options at install
===============================
approx. 30GB to /export/home
approx. 10GB to /opt

the rest of the slices were left at the Solaris install defaults

but I was surprised that the Solaris system CRASHED, with an error
about the disk space being full (FILE SYSTEM FULL)

from df -k I got the following
================================

kbytes used avail capacity mounted on

/dev/dsk/c0d0s0 39252414 3833719 52441 99%


/usr/lib/libc/libc_hwcap1.so.1 99% /lib/libc.so.1


Could someone tell me how I can reset this to an optimal value?

Also, DO I NEED TO DISABLE ANY LOGS??


After the system crash I used fsck to fix some problems, so I can
start Solaris now, but available disk space still shows a meagre 53 MB.

My downloads after installation would be around 5GB max -
where has all the disk space gone?

Please advise how I can recover my disk space.

thanks a ton,
Abdul

Phillip Bruce
2005-09-22 06:55:40 UTC
Permalink
Post by janaabdulaleem
<original message snipped>
Abdul,

Here are some common reasons why filesystems get full:

1. Large core files from applications that may have dumped core.
   Use "find / -name core -print" to find those files, then use
   "file core" to see which application caused the dump.
   Once you figure out why they are doing that, you can remove
   those core files.

2. Log files are too big - /var/adm is generally where most of the
   system logs are kept.
   Look at the current ones and delete the older logs such as the
   messages.* files.

3. Old crash dumps - when the system crashes, if crash dumps are
   enabled they will be stored under /var/crash/<name of server>.
   Get rid of the old ones but keep your latest ones until you're
   sure that the crashes are not happening anymore; otherwise get
   rid of those files too. (Some example commands follow this list.)

Also, I'm betting you placed the default filesystems in the / root
directory. It is better if you separate your filesystems. Make sure
that /, /usr and /var are separate filesystems, that is, they should be
on separate partitions (slices). That way you can manage your space
better instead of putting it all in one slice.

If you're still having problems, do a df -k or df -h and send that
output. You can also use the find command this way:

# find / -size +1000000c -print

That will find any file that is about 1 MB or larger. Be careful not to
remove just any file, as you may need it, but most often they are logs
or core files that have grown or were created by applications failing
for some reason.
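
Another quick way to see where the space has gone (again just an
illustration, run as root) is to total up the directories on the root
filesystem and look at the biggest ones:

# du -dk / | sort -n | tail -20

The -d flag keeps du from crossing into /export/home, /opt and the
other slices, so the numbers reflect only the filesystem that is
actually full.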

I hope this helps.

Phillip



Russell Aspinwall
2005-09-22 07:54:58 UTC
Permalink
Post by Phillip Bruce
Post by janaabdulaleem
Hi All,
My hard disk is 60GB
and I have used it completely (no other partitions) to install Solaris
Disk slicing options at install
===============================
approx. 30GB to /export/home
approx. 10GB to /opt
the rest of the slices were left at the Solaris install defaults
Would not the default install be better if only swap and / were
provided, and swap and root partitions were calculated as below

swap = 0.5GB + (disk partition size (GB) * 0.025)
/ partition size = disk partition space (GB) - swap space


e.g.

20GB disk

swap = 0.5GB + ( 20 * 0.025)
= 1GB swap

/ = 19GB


200GB disk

swap = 0.5GB + (200 * 0.025)
= 5.5GB swap

/ = 194.5GB
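
For what it's worth, plugging the original poster's 60GB disk into the
same rule of thumb (just a quick check with bc, purely illustrative):

$ echo '0.5 + (60 * 0.025)' | bc
2.000

i.e. roughly 2GB of swap and about 58GB left for /.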
<remainder of quoted text snipped>
--
Regards

Russell

Email: russell dot aspinwall at flomerics dot co dot uk
Network and Systems Administrator Flomerics Ltd
Telephone: 020-8941-8810 x3116 81 Bridge Road
Facsimile: 020-8941-8730 Hampton Court
Surrey, KT8 9HH
United Kingdom



Phillip Bruce
2005-09-22 14:06:31 UTC
Permalink
Russell Aspinwall wrote:
<original email snip>
Post by Russell Aspinwall
Would not the default install be better if only swap and / were
provided, and swap and root partitions were calculated as below
swap = 0.5GB + (disk partition size (GB) * 0.025)
/ partition size = disk partition space (GB) - swap space
eg
20GB disk
swap = 0.5GB + ( 20 * 0.025)
= 1GB swap
/ = 19GB
200GB disk
swap = 0.5GB + (200 * 0.025)
= 5.5GB swap
/ = 194.5GB
</more of the original email snip>

Russell,

It actually depends on what you are doing with the system, but in
reality /, /usr and /var are the minimum set I would have regardless,
even in large disk environments.

As for swap, the old de facto rule used to be, and still is for the most
part, that you size swap to the physical size of memory. If you're
running databases or other I/O-intensive apps you'll want swap to be at
least 2 times the size of physical memory.

But do keep in mind that Solaris uses swap quite differently.
http://sunsolve.sun.com/search/document.do?assetkey=1-30-1434-1

The above link is based on much older technology, but it is an example
that has been used and that many still use today.

Go here: http://sunsolve.sun.com/search/document.do?assetkey=1-30-1434-1
That is a PDF written by the very authors of the Solaris Internals book,
which I highly recommend. Start at slide 75, which is where it talks
about swap space.

Also keep in mind that vendors may have their own swap space
requirements; you may want to research what those are first, tune the
OS to those specifications, and then see if you need to go lower or
higher.

You could use a swap file to gauge this as well, but I don't recommend
you keep the swap file permanently as it can affect performance; a swap
file is still overhead to the OS that you don't need. Once you have
gauged where swap needs to be, remove it and move the swap onto a raw
partition that is available.
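
For example, something along these lines (the file name and the spare
slice are only examples - use whatever is actually free on your system):

# mkfile 1024m /export/tmpswap       (create a 1GB temporary swap file)
# swap -a /export/tmpswap            (add it to the swap pool)
# swap -l                            (watch over time how much is really used)
# swap -d /export/tmpswap            (remove it once you know your requirement)
# rm /export/tmpswap
# swap -a /dev/dsk/c0d0s1            (then add a raw slice permanently instead)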

Also read this
http://blogs.sun.com/roller/page/rmc?anchor=the_vm_system_formally_known

I hope this helps.

Phillip




Jon LaBadie
2005-09-23 00:14:44 UTC
Permalink
Post by Phillip Bruce
As for swap, the old de facto rule used to be, and still is for the most
part, that you size swap to the physical size of memory. If you're
running databases or other I/O-intensive apps you'll want swap to be at
least 2 times the size of physical memory.
Seems to me that with swap space now used for /tmp, the old standard of
one or two times memory is way too small (unless you have 32GB of memory :)
--
Jon H. LaBadie ***@jgcomp.com
JG Computing
4455 Province Line Road (609) 252-0159
Princeton, NJ 08540-4322 (609) 683-7220 (fax)


Russell Aspinwall
2005-09-23 06:13:16 UTC
Permalink
Post by Phillip Bruce
<earlier quoted text snipped>
Russell,
It actually depends on what you are doing with the system, but in
reality /, /usr and /var are the minimum set I would have regardless,
even in large disk environments.
Hi Phillip,

After twenty years working with SunOS I never accept the default but
customise for the applications that are going to be run.

My suggestion was for a default which gives new users to Solaris (or
Unix) a more sensible default based on the space allocated for their
Solaris installation. Attempting to minimise the problems experienced in
the early phases of the learning curve should hopefully encourage them
to retain their use of Solaris.

Solaris needs to reach out beyond the technically aware to the
technically challenged; that is where market growth exists. Capturing
users from other Unix/Linux variants does help Solaris long term.
--
Regards

Russell

Phillip Bruce
2005-09-24 04:08:38 UTC
Permalink
Post by Russell Aspinwall
<earlier quoted text snipped>
Hi Philip,
After twenty years working with SunOS I never accept the default but
customise for the applications that are going to be run.
My suggestion was for a default which gives new users to Solaris (or
Unix) a more sensible default based on the space allocated for their
Solaris installation. Attempting to minimise the problems experienced in
the early phases of the learning curve should hopefully encourage them
to retain their use of Solaris.
Solaris needs to reach out beyond the technically aware to the
technically challenged; that is were market growth exists. Capturing
users from other Unix/Linux variants does help Solaris long term.
--
Regards
Russell
Russell,

Neither have I, and I agree with your statement that Sun does need to
reach out more to those technically challenged users.

Phillip



Chris Albertson
2005-09-22 16:19:15 UTC
Permalink
Post by Russell Aspinwall
Would not the default install be better if only swap and / were
provided, and swap and root partitions were calculated as below
swap = 0.5GB + (disk partition size (GB) * 0.025)
/ partition size = disk partition space (GB) - swap space
A lot of people suggest that, but then some of those have other disks
that they use for /export/home and so on.

My question is: if you have one disk and just one "/" filesystem,
how do you do a backup? You can't dump a mounted filesystem, can you?
I think there is a way to create a snapshot of a UFS?? I solved
the problem by using mirrors, but I'm talking here about a one-disk,
one-FS system.

Chris Albertson
Home: 310-376-1029 ***@yahoo.com
Cell: 310-990-7550
Office: 310-336-5189 ***@aero.org
KG6OMK



Geoff Lane
2005-09-22 16:39:27 UTC
Permalink
Post by Chris Albertson
I think there is a way to create a snapshot of a UFS??
fssnap can be used to create a static "copy" of a UFS filesystem which
can then be dumped by ufsdump or other backup software.

(It's not a real copy, but copy-on-write blocks are created as needed so the
system runs normally but you get a point-in-time dump. It's extremely neat
:-)
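
A rough sketch of how that might look (the backing-store directory and
tape device are only examples, and the backing store has to live on a
different filesystem from the one being snapped):

# fssnap -F ufs -o backing-store=/export/snaps /
/dev/fssnap/0
# ufsdump 0f /dev/rmt/0 /dev/rfssnap/0
# fssnap -d /

The ufsdump runs against the raw snapshot device, so the live / can keep
changing underneath it without affecting the dump.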
--
Geoff Lane
If earth and we humans are really composed of material that
represents only 4% of the mass of the universe, can it be said
that we truely are "The Scum of the Universe"?


Mike Gerdts
2005-09-22 23:35:49 UTC
Permalink
Post by Chris Albertson
My question is if you have one disk and just one "/" filesystem
how do you do a backup? You can't dump a mounted filesystem?
I think there is a way to create a snapshot of a UFS?? I solved
the problem by using mirrors but I'm talking here about a one
disk one FS system.
How many people take a system down to back up the root disk? Does this mean
that people that claim months or more of uptime have never backed up their
systems in all that time? Unlikely.

I have never seen anyone take a system down for regular backups. In reality,
people (with big budgets) use tools like Netbackup or Networker to back up
mounted file systems. So long as you are not in the middle of patching the
system, installing additional packages, running devfsadm, etc., everything
that really matters on the root file system is in a pretty static (not open
read-write) state.

Mike


Russell Aspinwall
2005-09-23 08:53:53 UTC
Permalink
Post by Chris Albertson
<earlier quoted text snipped>
My question is if you have one disk and just one "/" filesystem
how do you do a backup? You can't dump a mounted filesystem?
I think there is a way to create a snapshot of a UFS??
man fssnap - unfortunately you do need enough disk space to create a
snapshot which you can back up.

I always use a multi-disk approach: OS on the boot disk and
applications/data on other disk(s). At home I ufsdump the contents of
the boot disk to a second disk as a backup in single-user mode (once a
month) and tar the apps/data weekly.

ufsdump 0bf 126 /backup/ufsdump.root /

tar cvf /backup/appsdata.tar /opt /home
--
Regards

Russell

Email: russell dot aspinwall at flomerics dot co dot uk
Network and Systems Administrator Flomerics Ltd
Telephone: 020-8941-8810 x3116 81 Bridge Road
Facsimile: 020-8941-8730 Hampton Court
Surrey, KT8 9HH
United Kingdom



Ian Collins
2005-09-23 20:59:52 UTC
Permalink
Post by Russell Aspinwall
<earlier quoted text snipped>
man fssnap - unfortunately you do need enough disk space to create a
snapshot which you can back up.
Not so, the snapshot is a sparse file.

for example:

bash-3.00# fssnap -o backing-store=/etc /export/home/
/dev/fssnap/0
bash-3.00# df -kl /
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t0d0s0 8266743 4765542 3418534 59% /
bash-3.00# ls -l /etc/sn
snapshot0 snmp/
bash-3.00# ls -l /etc/snapshot0
-rw------- 1 root other 117819626496 Sep 24 08:58 /etc/snapshot0
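
For what it's worth, comparing that with du shows how little is really
allocated, and the snapshot can be torn down afterwards (paths as in the
example above):

# du -k /etc/snapshot0
# fssnap -d /export/home
# rm /etc/snapshot0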

Ian


Ben Taylor
2005-09-23 12:19:46 UTC
Permalink
Post by Mike Gerdts
Post by Chris Albertson
My question is if you have one disk and just one "/" filesystem
how do you do a backup?
Single-user mode, dump to tape or to a writable NFS host if you're a
client of a network backup system. None of the production environments
I've ever worked on had only just "/".
Post by Mike Gerdts
Post by Chris Albertson
You can't dump a mounted filesystem?
You can. There is just the possibility of modifications to open files.
Post by Mike Gerdts
Post by Chris Albertson
I think there is a way to create a snapshot of a UFS?? I solved
the problem by using mirrors but I'm talking here about a one
disk one FS system.
How many people take a system down to back up the root disk?
Not many, unless there's some sort of disk corruption
problem they're trying to work around.
Post by Mike Gerdts
Does this mean
that people that claim months or more of uptime have never backed up their
systems in all that time? Unlikely.
I have never seen anyone take a system down for regular backups. In reality,
people (with big budgets) use tools like Netbackup or Networker to back up
mounted file systems. So long as you are not in the middle of patching the
system, installing additional packages, running devfsadm, etc., everything
that really matters on the root file system is in a pretty static (not open
read-write) state.
Though probably not recommended as a backup mechanism, if you only had
/ and a Live Upgrade partition, you could run LU to back up your disk to
another slice.
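
Something like this, purely as a sketch (the boot environment names and
the spare slice are made up - substitute a slice you actually have free):

# lucreate -c current_BE -n backup_BE -m /:/dev/dsk/c0d0s3:ufs
# lustatus

If the primary ever went bad you could then luactivate the copy and
boot from it.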

Ben



Mike Gerdts
2005-09-24 14:02:47 UTC
Permalink
Post by Ben Taylor
Though probably not recommended as a backup mechanism,
if you only had /, and a live-update partition, you could
run LU to backup your disk to another slice.
In the most common invocation of Live Upgrade, the alternate boot
environment is created using cpio to copy the OS data to the alternate
boot environment. From the standpoint of stability on disk, this is
pretty equivalent to the mechanisms used by products such as NetBackup.
FWIW, NetBackup creates its data stream with (a possibly modified) GNU
tar.

I have heard many favorable references to the use of flash archives
(same mechanisms used, possibly lots of shared code, definitely the same
development group) for disaster recovery purposes. In general, the
context of these discussions has been Sun's Enterprise Services group or
various Fortune 100 companies.

I personally have found that the most reliable way to use live upgrade is to
bypass the cpio mechanism and duplicate boot environments using SVM. I then
use lucreate with the preserve option so as to not whack the mirror that I
create and then split off. The reason for this has nothing to do with files
that may be open for writing and has everything to do with the fact that
Sun's cpio does not handle sparse files properly. My environment has
gatherings of UID's around 100 million, 212 million, and 500 million. This
means that while /var/adm/lastlog is using only a few KB (way less than 1
MB), "ls -l" reports it to be about 16 GB. When live upgrade (or flarcreate)
uses cpio, the on-disk size of the file balloons to 16 GB, which tends to be
larger than the target file system.
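
If you want to see the effect for yourself, comparing the apparent size
with the allocated blocks makes it obvious (illustrative, any sparse
file will do):

$ ls -l /var/adm/lastlog
$ du -k /var/adm/lastlog

ls -l reports the apparent size, while du -k reports the blocks actually
allocated on disk; on a system like the one described the two differ by
orders of magnitude.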

Mike


Phillip Bruce
2005-09-24 16:48:15 UTC
Permalink
Post by Mike Gerdts
The reason for this has nothing to do with files
that may be open for writing and has everything to do with the fact that
Sun's cpio does not handle sparse files properly.
Mike,

While you are right about the OS side of it, there are applications
like Oracle and others that do have open files. If you have that case
within any filesystem that you're backing up, then you have to worry
about those open files, otherwise you'll have corruption. Those are
generally controlled by some file locking mechanism.

The easiest thing to do is either buy Veritas NetBackup, Legato
Networker, or other backup software (maybe something like St. Bernard)
that can handle open files, or - the easiest thing I've found - write a
shell script, invoked by the backup software, that stops the application
just long enough to back it up and then starts it again.

Otherwise, as you or someone else mentioned, it is better to use a
snapshot or PIT (point-in-time) copy to a separate mirror, then split
the mirror off and back up from the mirror. The reason for that is so
you can continue your application operations without affecting the
application and without risking downtime. That helps bring your backup
window down to a smaller time frame.

Phillip


Mike Gerdts
2005-09-24 19:12:42 UTC
Permalink
Post by Phillip Bruce
Mike,
While you are right about the OS side of it, there are applications
like Oracle and others that do have open files. If you have that case
within any filesystem that you're backing up, then you have to worry
about those open files, otherwise you'll have corruption. Those are
generally controlled by some file locking mechanism.
I'm with you 100% on that - applications (such as databases) need to either
use application specific backup mechanisms or be in a shut-down or otherwise
read-only state.

Earlier in the thread people seemed to be arguing that lots of partitions
for /, /var, /usr, /opt, ... somehow made backups easier. In the context of
backing up the OS, there are very few times when a purely OS disk has enough
activity on it to make any inconsistencies on it worth worrying about.
However, if /var happens to hold the mail spool for a few thousand users,
you may want to be a bit more careful than if the most active thing in /var
is a few entries per hour to /var/adm/messages.

Mike


Phillip Bruce
2005-09-25 15:01:24 UTC
Permalink
<snip text>
Earlier in the thread people seemed to be arguing that lots of partitions
for /, /var, /usr, /opt, ... somehow made backups easier.
</snip text>
Mike
Mike,

Having separate filesystems makes things easier to manage, especially
when you have NON-critical filesystems that don't need the OS shut down
to repair. Also, if you don't have logging enabled, fsck won't take as
long to run at boot time either. I still see too many people put
everything under / just because they think it is easier to back up a
single filesystem than multiple filesystems.

The disadvantage of having everything under root is that if you have a
corrupted /, you'll end up having to shut down the OS to repair it. You
only need to do that if / or /usr has those types of problems; for the
others you can leave the OS up and running as long as you don't have an
application using that filesystem. Then it is just a matter of shutting
down that app and repairing that filesystem.

Also, too many times I've seen admins who centralize their log files
under the /var/adm directory, with /var living under the / filesystem,
and then wonder why / gets full and why the OS slows to a crawl and
crashes because of it.

PLANNING!!! I can't stress enough how important it is to plan your
operating system filesystem layout so that it benefits your particular
needs. If you can manage separate filesystems then you are much better
off and will have far less downtime than you will with just a single
filesystem with everything on it.

Phillip




Mike Gerdts
2005-09-25 16:46:40 UTC
Permalink
Post by Phillip Bruce
<quoted text snipped>
I am a firm believer that the OS belongs in /. Applications (i.e. the
reason that you have the OS) belong elsewhere. Application logs should
not write to / or to /var. The only directory that should be writable by
non-sysadmin users is /var/tmp. On a system where that is subject to
abuse, /var/tmp should be considered for a separate file system or an
aggressive tmp cleaner should be used. /tmp should be tmpfs and should
be limited to a smallish portion of the amount of physical RAM on the
machine. Applications (Oracle on a database server, the mail spool and
mail queue on a mail server, ~ftp on an anonymous FTP server, logs for a
centralized syslog server, etc.) belong on a file system separate from
any OS data.
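
For instance, a vfstab entry along these lines caps tmpfs (the 512m
figure is only an example - pick something appropriate for the machine):

swap    -    /tmp    tmpfs    -    yes    size=512m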

I can honestly say I have never seen downtime on a Solaris box that was a
result of someone filling up the root file system when / and /var were on
the same file system that could have been prevented if / and /var were on
different file systems. I've heard many people say that filling up a file
system can lead to file system corruption. There have most definitely
been bugs in the past where this could happen. There are also bugs,
however, where the use of rm or ln at just the wrong time would cause
file system corruption. More likely, however, people are lumping file
truncation due to out-of-block situations (file system data loss) in
with file system corruption. If you were to
talk to a file system developer, I would bet they would only classify
conditions of file system metadata loss as file system corruption.

I have an aggregate 1000+ desktops and 800+ servers (SPARC 1 - 25k) in my
experience. I have seen significant amounts of planned downtime because
someone thought that dicing up the OS disks into lots of partitions would be
a good idea. An outage is then planned to perform the rather complicated
process of repartitioning OS disks.

Even with good planning, something often happens to the purpose of the
machine or some other influence causes bloat in one of the file systems. As
an example, consider a typical machine from Sun's line-up when Solaris 8 was
new. Who would think that on that system with 4 - 18 GB disks that you need
to have a huge /var to deal with /var/sadm that grows out of control because
Sun has a monstrosity called the "kernel and apache" patch that grows to
well over 50 MB (uncompressed) and gets revved on average twice per
month, starting 2 years after the system was installed? Hope you pick a good
patch to go to, because either you can't save backout data or you will be
patching again (with another 50 MB of backout data) real soon.


Post by Phillip Bruce
The disadvantage of having everything under root is that if you have a
corrupted /, you'll end up having to shut down the OS to repair it. You
only need to do that if / or /usr has those types of problems; for the
others you can leave the OS up and running as long as you don't have an
application using that filesystem. Then it is just a matter of shutting
down that app and repairing that filesystem.
Perhaps you are doing something different than I am, but I can't think
of too many times where I ran into a corrupted OS file system and
thought that the best course of action was to try to fix it with the
production workload running. Frankly, if I am running into one file
system that is corrupt, this implies that there may be something going
on in kernel data structures or logic that says I should probably
reboot, fsck all file systems of that type, and go looking at the most
recent patches for the kernel, that file system type, the relevant block
devices, SCSI controllers, etc. to see if any of the bugs fixed look
like what I just saw.

Post by Phillip Bruce
Also, too many times I've seen admins who centralize their log files
under the /var/adm directory, with /var living under the / filesystem,
and then wonder why / gets full and why the OS slows to a crawl and
crashes because of it.
If you have a lot of activity on the partition(s) that your OS is on, you
are patching, running backups, or doing something wrong. That is, even if
someone puts something huge in /var/tmp and fills up /, the overall system
performance should not be impacted - only those things that need to allocate
new blocks in the same partition as /var/tmp may be impacted because the OS
is unable to allocate contiguous blocks or the blocks are distant from the
relevant inode(s).

Post by Phillip Bruce
PLANNING!!! I can't stress enough how important it is to plan your
operating system filesystem layout so that it benefits your particular
needs. If you can manage separate filesystems then you are much better
off and will have far less downtime than you will with just a single
filesystem with everything on it.
Agreed. Plan to keep strict separation between the OS and the reason you are
running an OS. It will make it easier to know who to page when things go
wrong (sysadmin vs. dba), it will make it so that you can have cookie-cutter
images so that you may have no reason to back up the OS (it looks just like
the flar...), and it will make it much easier when it comes time to migrate
applications to a new system, upgrade/reinstall the OS, use tools like live
upgrade, etc.

Mike


Ben Taylor
2005-09-23 12:29:31 UTC
Permalink
Post by Jon LaBadie
Post by Phillip Bruce
As for swap, The old defacto used to be and still is for the most part
that you sized swap to physical size of memory. If your running Databases
or other high intensive I/O apps you'll want to be at least 2 times the
physical size of swap.
Seems to me that with swap space now used for /tmp that the old standard,
one or two time memory is way too small. (unless you have 32GB of memory :)
One of the Solaris installation guides has a staggered chart for the
amount of swap based on the amount of memory. The 2X value is only
really useful on small systems with a small memory footprint. IIRC, on
really big systems (16-272GB), the recommendation was 35% of memory for
swap. Obviously, this is a ballpark figure and doesn't adequately cover
all systems. In those cases, understanding how your applications use
memory (like shared memory for Oracle), and limiting the amount of
memory that can be used for /tmp, can help you size your swap
requirements more appropriately.
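
A quick way to sanity-check where a given box stands (illustrative; the
output obviously varies by machine):

# prtconf | grep Memory
# swap -l
# swap -s

prtconf reports the physical memory, swap -l lists the configured swap
devices, and swap -s summarizes how much virtual swap (which on Solaris
includes memory) is reserved and still available.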



John D Groenveld
2005-09-23 14:53:56 UTC
Permalink
Post by Ben Taylor
<quoted text snipped>
Do Anil Gadre and company's marketing wonks, or someone else here who
follows the stationary and mobile workstation markets, have trend data
which can be used to project when Acer will release a Ferrari with 16GB
of RAM?

I ask because I recently noticed that Crucial says the W2100z will
support 32GB of RAM.

Are the current defaults set in stone for the S10 updates?

Is there an OpenSolaris or other dialog going on somewhere to discuss
how the suninstall default layout might be enhanced to support the
realities of large memory and disk configurations and Solaris features
like Live Upgrade?

John
***@acm.org



Al Hopper
2005-09-23 22:45:24 UTC
Permalink
Post by John D Groenveld
<quoted text snipped>
Does Anil Gadre and company's marketing wonks or someone else here
who follows the stationary and mobile workstation markets, have trend
data which can be used to project when Acer will release a Ferrari with
16GB ram?
I ask, because I recently noticed that Crucial says the W2100z will
support 32GB ram.
Are the current defaults set in stone for the S10 updates?
Is there an OpenSolaris or other dialog going on somewhere to discuss
how the suninstall default layout might be enhanced to support the
realities of large memory and disk configurations and Solaris features
like Live Update?
There is a group being set up on OpenSolaris which deals with
"Approachability" issues. You may well ask what Approachability really
means, and it's hard to provide an exact definition. But a general
description would be that it is tasked with identifying, prioritizing
and "fixing" any issues which affect the usability and approachability
of Solaris. Another way of describing it, perhaps less complimentary to
Sun in general, would be that it handles usability issues within
(Open)Solaris that have long been identified as a royal pain to deal
with, but have (usually) fallen outside a particular group's mandate.

Default install configs would fit under usability/approachability and
generally touch on many different areas of the code - so no one group of
developers has been able, up to now, to make the required changes, or
get the required commitment/budget to "make it happen".

This is a big step forward for (Open)Solaris IMHO. So if you want *your*
bugs fixed, you'll have to get involved ... and maybe even do some of the
work!

Regards,

Al Hopper Logical Approach Inc, Plano, TX. ***@logical-approach.com
Voice: 972.379.2133 Fax: 972.379.2134 Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005


John D Groenveld
2005-09-25 23:29:14 UTC
Permalink
Post by Al Hopper
Default install configs would fit under useability/approachability and
generally touch on many different areas of the code - so no one group of
developers have been able, up to now, to make the required changes, or get
the required committment/budget to "make it happen".
Might the groups responsible for the various ISV starter kits (e.g.
Oracle, RealNetworks, JES (SunONE/iPlanet), etc.) be required to fund
the functionality for suninstall to add the Live Upgrade slices and
calculate any necessary swap for mobile or stationary developer
workstations?
Post by Al Hopper
This is a big step forward for (Open)Solaris IMHO. So if you want *your*
bugs fixed, you'll have to get involved ... and maybe even do some of the
work!
I'm no MBA, but I don't see a viable business plan around usable/
approachable OpenSolaris distributions that lack Sun's Solaris
trademark.

I suspect the MadHatters faced the same problem when they explored
putting up a SunLinux distribution against established Redhat and Suse.

I know that was my take-away from the discussions following the January
8th 2002 debacle about possibly forming a company to purchase the rights
to support Solaris x86.

Ode to dot-com days of free money and free engineer/hours.

John
***@acm.org



Ben Taylor
2005-09-23 18:48:51 UTC
Permalink
Post by John D Groenveld
<quoted text snipped>
Does Anil Gadre and company's marketing wonks or someone else here
who follows the stationary and mobile workstation markets, have trend
data which can be used to project when Acer will release a Ferrari with
16GB ram?
For the most part, any workstation really should only have enough swap
to handle dumping a core, handling tmp file space and so forth. Does it
really make sense to have 16G of swap on a laptop? I can't see the
point. Swap and tmpfs are very transient, and as has always been said
about Solaris - if you are paging like crazy, you either add memory to
the system, move the app, or figure out how to reduce the paging
(application optimizations).
Post by John D Groenveld
I ask, because I recently noticed that Crucial says the W2100z will
support 32GB ram.
Solaris is very much optimized to use memory first, and only page out
LRU pages (IIRC). Other than file system buffering, my experience is
that applications that have poor memory management (like Mozilla) end up
with pages paged out to the swap device. If you've ever loaded up /tmp
until it is nearly full, you can see the degradation in performance as
the lack of free pages for the UFS buffer becomes apparent.
Post by John D Groenveld
Are the current defaults set in stone for the S10 updates?
I'd have to guess that the current defaults for S10U1 are setting in
concrete right now. Whether or not they can be changed, or need to be
changed, depends on whether or not there's a bug or RFE logged against
it, and whether it becomes a priority to fix it. As I'm not in
engineering, I won't speak for them.

I've always wondered if the list shouldn't have some sort of tutorial
(maybe a wiki) on installing Solaris - some kind of tree structure so
that folks could understand why you might want to use a single / config,
something with more flexibility like a multiple-instance LU config, or
something more traditional (but less appropriate for Solaris these days
unless you're doing really tight security or driver development) such as
a split / and /usr config.
Post by John D Groenveld
Is there an OpenSolaris or other dialog going on somewhere to discuss
how the suninstall default layout might be enhanced to support the
realities of large memory and disk configurations and Solaris features
like Live Update?
No idea. I'm not even sure that the suninstall stuff is in OpenSolaris
yet, so the point may be moot.

Again, large memory and disk configurations really should rely on the
requirements of the system and the expertise of the installer, rather
than trying to jam a square peg into a round hole with a set of defaults
(though if better defaults are required, perhaps those can be improved).

Ben


