[Ach] removed outdated info on Linux RNG / haveged
arw at cs.fau.de
Tue Jul 11 20:03:24 CEST 2017
On 2017-07-11T18:53, Alice Wonder <alice at librelamp.com> wrote:
> On 07/11/2017 09:00 AM, Aaron Zauner wrote:
> > > On 10 Jul 2017, at 10:35, Florian Stosse <florian.stosse at gmail.com> wrote:
> > >
> > > Further insights I posted on GitHub; I forward them here:
> > >
> > > Got an answer from Andre Seznec (credited as one of the main authors: https://www.irisa.fr/caps/projects/hipsor/contact.php)
> > >
> > > He replied that, in his opinion, the principles on which HAVEGE and the haveged daemon are built are still valid, and in fact are more efficient today given the microprocessors architectural evolution (more complex architectures and more non-predictable states usable to gather entropy).
> > Has the author taken a look at how CSPRNGs are implemented currently in Linux, FreeBSD, OpenBSD and Windows? I don't think HAVEGE's concept is still valid. We have high speed, high-security CSPRNGs now in every major operating system, without the need for additional user-land daemons that are prone to exploitation, user-error or bugs. Please correct me if I'm wrong. Where do you see the benefits of using HAVEGE over - say - Linux's `urandom` char device as implemented in Linux 4.x?
> > > He acknowledged that he did not touch the code for +/- 10 years, and I couldn't reach the listed maintainer. On Debian, the latest maintainer upload was in November 2016.
> > With security critical code - at least for me - this is a clear no-go.
> Please just stop.
> Give an academically sound demonstration (as in a published exploit or
> peer-reviewed paper) of a flaw in haveged, or just stop.
That would not matter, since even if haveged were flawed, the only thing
it could do (at least when using it to refill /dev/(u)random) would be
to screw up the entropy estimate of /dev/random, thus making it block
less often. Which isn't really too much of a problem. Refilling with
predictable entropy doesn't magically make the entropy pool go bad,
except if one assumes that the mixing function is reversible, which
would imply cryptographic hash functions or ciphers being predictable.
That would be a 'we are screwed anyway' scenario. Or if the mixing
function were bad, which is also synonymous with 'we are screwed'.
Which means that haveged doesn't make anything worse entropy-wise,
except reduce the blocking behaviour by /dev/random.
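The argument that mixing in attacker-known data cannot erase existing
uncertainty can be illustrated with a toy hash-based pool. This is only a
sketch under simplified assumptions (the kernel's real mixing function is
different; all names below are illustrative):

```python
import hashlib
import os

class ToyPool:
    """Toy entropy pool: the state is updated by hashing in new input.

    Illustration only -- not the Linux kernel's actual mixing scheme.
    The point is the same, though: as long as the hash is one-way,
    mixing in fully predictable data cannot erase the uncertainty
    already present in the pool.
    """

    def __init__(self, seed: bytes):
        self._state = hashlib.sha256(seed).digest()

    def mix(self, data: bytes) -> None:
        # The new state depends on both the old state and the input; an
        # attacker who knows `data` but not the old state still cannot
        # predict the new state without inverting SHA-256.
        self._state = hashlib.sha256(self._state + data).digest()

    def read(self, n: int = 32) -> bytes:
        # Derive output without exposing the raw state, then stir.
        out = hashlib.sha256(b"out" + self._state).digest()[:n]
        self.mix(b"reseed-after-read")
        return out

# Two pools with different secret seeds, both fed the same
# attacker-controlled, fully predictable input:
a = ToyPool(os.urandom(32))
b = ToyPool(os.urandom(32))
for pool in (a, b):
    pool.mix(b"predictable attacker input")
# The outputs still differ, because the secret seeds were not erased.
print(a.read() != b.read())
```

Only if SHA-256 here were reversible or otherwise broken could the known
input 'cancel out' the unknown seed -- which is the 'we are screwed
anyway' scenario from above.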
On the other hand, haveged also doesn't really make anything better, at
least Linux has used interrupt timings as a source of entropy for ages.
Haveged relies on L1 cache timings, which are roughly derivatives of
those interrupt timings plus some additional CPU nondeterminism. L1
misses occur more often than interrupts, however, the collection of
timings in haveged is necessarily more limited than in the kernel. So I
don't see the quality advantage of haveged's entropy gathering. Also, in
more primitive embedded systems that are the usual suspects for low
entropy at boot time, L1 cache timings are necessarily more predictable.
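The jitter-harvesting idea can be sketched minimally as below. This is
not haveged's actual algorithm (which deliberately walks large code and
data structures to provoke cache and branch-predictor misses); it only
samples raw timer deltas, so its output is far from uniform and is not
usable as a real entropy source:

```python
import time

def jitter_samples(n: int = 256) -> list:
    """Collect the low-order bits of successive timer deltas.

    HAVEGE-style gatherers amplify this jitter by thrashing caches and
    branch predictors between reads; here we only do trivial busy work,
    so this demonstrates the principle, not the quality.
    """
    samples = []
    last = time.perf_counter_ns()
    while len(samples) < n:
        # Busy work whose duration depends on microarchitectural state.
        acc = 0
        for i in range(64):
            acc ^= (acc << 1) ^ i
        now = time.perf_counter_ns()
        samples.append((now - last) & 0xFF)  # keep the noisiest bits
        last = now
    return samples

bits = jitter_samples()
print(len(bits), min(bits), max(bits))
```

On a simple in-order embedded CPU with small or no caches, those deltas
become much more regular, which is exactly the concern raised above.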
So, entropy-wise, I think it doesn't hurt, but it doesn't help either.
> Change for the sake of change is idiotic.
There is another thing besides entropy to consider here: The very
existence of 'yet another service that does stuff' is generally
considered a problem in systems administration and security. More code
is more attack surface and more stuff that can go wrong. So if that code
is also useless, one should prune it.
Theoretically, if one is given to paranoia, there is also another
argument against haveged, from the original HAVEGE paper:
"In practice, the security of the HAVEGE generator relies on both the
unfeasibility of reproducing its internal state, and on the continuous
and unmonitorable injection of new uncertainty in its internal state by [...]"
Both arguments also apply to the entropy gathering of the operating
system kernel. However, the OS kernel is in a far better protected
position to hide its internal state and gather events, compared to a
userspace process like haveged. Haveged is more vulnerable to such
(admittedly somewhat hypothetical) attacks.
Btw. is there any reference regarding those deadlocks that haveged
supposedly could produce according to
 Or maybe not that hypothetical, there are quite a few side-channel
attacks on RSA multiplications based on cache timings. So I would
consider it at least possible (especially compared to what we knew
about cache timings compared to 2003) that there might be a
problem... But I'm not sure, I'd have to think on it a bit longer.