An Oracle employee has warned that the analytics features of its ZFS storage appliances can result in “unresponsive” systems.
The post linked to above opens with Oracle staffer Matt Barnson stating: “I've received a number of questions about analytics and the problems they cause for the Oracle ZFS Storage Appliance.”
There's nothing wrong with Oracle's analytics – Barnson reckons they're a great reason to consider a ZFS appliance – but it appears owners of the appliances may be abusing them, despite the presence of an analytics guide that warns "analytics statistic collection comes at some cost to overall performance."
Not all users of the appliance are reading the instructions, however. As Barnson wrote: "From takeover times to unresponsive systems, it's possible for a customer to cause their appliance numerous challenges by invoking analytics without a care for the system cost."
He went on to explain the wrong way to run analytics on the ZFS appliance, offering a list of worst practices to avoid if one prefers a working storage array to one that spends non-useful amounts of time navel-gazing.
The worst thing one can do to a ZFS appliance, he wrote, is “any breakdown at all involving L2ARC. Use them, but turn them off when you're done. There are some challenges with zpool import (circa 2015) that because L2ARC is restricted to a specific head results in long pool takeover times if L2ARC-related analytics are enabled when a takeover is performed.”
Barnson says more than a few users are having trouble with ZFS analytics, and wrote the post because he feels the analytics guide may be a little dense. ®