Here's my list of frequently asked questions.

Q1. Compiling SHA on Solaris
Q2. Increasing the size of the entropy pool


Q1. Compiling the included SHA module on Solaris sometimes fails. Common
errors include:

 /usr/ucb/cc:  language optional package not installed.

 cc -c   -xO3 -xdepend     -DVERSION=\"1.2\"  -DXS_VERSION=\"1.2\" -KPIC
 -I/usr/perl5/5.00503/sun4-solaris/CORE  SHA.c
 cc: unrecognized option `-KPIC'
 cc: language depend not recognized
 cc: SHA.c: linker input file unused since linking not done

A1:

This indicates a broken perl installation, one which does not know how to
compile modules. The best fix is to hassle your perl admin to install it
properly. It may be possible to hand-compile the module (using the correct
flags), but this is a nuisance.

When perl is built, the Configure script determines the name of the compiler
and the options it wants, and records them in a module called Config.pm.
Later, when you build a module, this saved information is used to create the
Makefile that will actually do the compile. Sun's "bundled" compiler wants
options like -xO3 and -KPIC, whereas gcc wants things like -O3 and -fPIC.

One problem with binary installations, especially on Solaris, is that the
compiler used to build perl may no longer be available to build modules. The
safest option is to install perl locally instead of using a binary-only
package. It's possible to find a Solaris system that has perl installed
properly, copy the compiler name and flags out of Config.pm on that system,
and manually edit the generated Makefile, but this isn't as reliable as
building perl yourself.

Failing that, you'll want to look at the Solaris-cc flags perl is trying to
use (-KPIC, plus optimization flags like -xO3 and -xdepend, as shown in the
error output above) and replace them with the appropriate gcc equivalents.
Look at the Solaris 'cc' manual to understand what the old flags are doing,
then look in the gcc manual to find replacements. Usually -KPIC turns into
-fPIC, and you can probably drop the optimization flags.

In particular you may be able to change parts of the compile line from:

 cc -c   -xO3 -xdepend     -DVERSION=\"1.2\"  -DXS_VERSION=\"1.2\" -KPIC

to

 gcc -c -O3 -DVERSION=\"1.2\"  -DXS_VERSION=\"1.2\" -fPIC

Try changing the compile command like that (manually copy, paste, edit, and
run the cc line), then re-run 'make' to finish it off.
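If you'd rather patch the generated Makefile than retype the command by hand,
the flag translation above can be sketched as follows. The substitution table
and the choice of -O2 are assumptions, not a complete Sun-cc-to-gcc mapping:

```python
# Sketch: translate Sun-cc flags in a generated module Makefile line
# into gcc equivalents, per the substitutions discussed above.
import re

SUBSTITUTIONS = [
    (r"-xO[0-9]\b", "-O2"),     # assumed optimization mapping; often droppable
    (r"\s*-xdepend\b", ""),     # no direct gcc equivalent; drop it
    (r"-KPIC\b", "-fPIC"),      # position-independent code
]

def translate(line):
    """Apply each flag substitution in turn to one Makefile line."""
    for pattern, replacement in SUBSTITUTIONS:
        line = re.sub(pattern, replacement, line)
    return line
```

Run it over the CCFLAGS/OPTIMIZE lines of the Makefile; you'll still need to
change the compiler name itself from cc to gcc separately.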


Q2:

> The answer I wanted to find in the docs was:
> "How could I generate sufficient entropy for 2000 sequential SSL connections
> a day?"
> 
> Would increasing $POOLSIZE, $MAX_ENTROPY and reducing $REFRESH_TIME help me
> in my quest?

> Date: Mon, 22 Jan 2001 12:06:22 -0700
> From: Tom Orban
> 
> I got and installed the egd, to use with openSSH on a bunch of our HP
> systems.  On some of them though, we have enough usage to where we run
> out of entropy very quickly, and then the ssh's seem to hang until it
> can build up some more entropy again.  Looking at the script, I have a
> couple of ideas on how to increase this, but am not really familiar
> enough with the theory behind the algorithm to make anything other than
> a guess.  I'm thinking that I can either increase the poolsize, or
> increase max_entropy, but like I said, I'm not really sure which one is
> the way to go.  Can you enlighten me?

A2:

Raising poolsize and max_entropy won't really help: the total entropy that
can be stored at one time is the minimum of those two values, but what you
need is a way to increase the *rate* of entropy generation. That means more
gatherers, running more frequently. You can change some of the parameters used
for the gatherers (bits per byte of output, minimum re-run time) if you think
your system is busy/random enough to justify it. (Running 'df' ten times a
second is unlikely to give you output any more random than running it once a
second; on idle systems, running it once a week may be more realistic.)
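For the "2000 sequential SSL connections a day" question, a quick
back-of-envelope calculation shows the sustained rate the gatherers would have
to supply. The bits-per-connection figure below is an assumption for
illustration only, not a property of any particular SSL implementation:

```python
# Back-of-envelope entropy budget (illustrative numbers, not EGD internals).
BITS_PER_CONNECTION = 400   # assumption: rough entropy cost of one SSL handshake
CONNECTIONS_PER_DAY = 2000  # from the question above
SECONDS_PER_DAY = 24 * 60 * 60

bits_per_day = BITS_PER_CONNECTION * CONNECTIONS_PER_DAY
bits_per_second = bits_per_day / SECONDS_PER_DAY

print(f"need {bits_per_day} bits/day, about {bits_per_second:.1f} bits/sec sustained")
# -> need 800000 bits/day, about 9.3 bits/sec sustained
```

A rate of ten-ish bits per second is more than idle-system gatherers usually
deliver, which is why the hardware suggestion below becomes attractive.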

So no, POOLSIZE and MAX_ENTROPY won't help; instead you need to add more
gatherers and raise the 'bits-per-byte' values for them. But be aware that
your system may not actually be random enough to create all that much
entropy: the higher you raise those values, the less random the resulting
data is. That may not be a problem for your application, but it represents a
security limitation. If you've got a need for high-volume entropy, some kind
of hardware random-number device is probably more appropriate. Various
commercial ones are available which use noise diodes to create random bits
which are then fed into the computer over a serial port. It would be a simple
matter to cook up a script that copies the bits off the serial port and uses
the 'write' interface of EGD to feed them into the entropy pool.
<http://www.fourmilab.ch/hotbits/how.html> describes a hardware RNG that
measures atomic decay to get its entropy.
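The serial-port feeding idea above can be sketched in Python. EGD's 'write
entropy' request is a 0x03 command byte, a 16-bit big-endian count of entropy
bits being credited, a one-byte data length, and then the data itself; the
device and socket paths here are assumptions you'd adjust for your system:

```python
# Sketch: feed bytes from a serial-attached hardware RNG into EGD's
# entropy pool via its 'write entropy' socket command (0x03).
import socket
import struct

SERIAL_DEVICE = "/dev/ttyS0"   # assumed serial port for the RNG
EGD_SOCKET = "/tmp/entropy"    # assumed path of EGD's unix-domain socket

def egd_write_message(data, bits=None):
    """Build an EGD 'write entropy' request: command byte 0x03,
    16-bit big-endian entropy-bit count, one-byte data length, data."""
    if bits is None:
        bits = 8 * len(data)   # credit full entropy for hardware RNG bytes
    return struct.pack(">BHB", 0x03, bits, len(data)) + data

def feed_forever():
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(EGD_SOCKET)
    with open(SERIAL_DEVICE, "rb", buffering=0) as rng:
        while True:
            chunk = rng.read(64)   # small chunks; length must fit one byte
            if not chunk:
                break
            sock.sendall(egd_write_message(chunk))

# feed_forever()  # uncomment to run against a live EGD socket
```

Crediting the full eight bits per byte is only appropriate because the source
is a hardware RNG; for anything less random you'd lower the 'bits' argument.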

