linux/drivers/md/bcache
dongdong tao 71dda2a562 bcache: consider the fragmentation when update the writeback rate
The current way of calculating the writeback rate only considers the
dirty sectors. This usually works fine when the fragmentation is not
high, but it gives an unreasonably small rate when very few dirty
sectors are spread over a lot of dirty buckets. In some cases the
dirty buckets can reach CUTOFF_WRITEBACK_SYNC while the dirty data
(sectors) has not even reached writeback_percent; the writeback rate
then stays at the minimum value (4k), so all writes get stuck in a
non-writeback mode because of the slow writeback.
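
For illustration, a back-of-the-envelope example with hypothetical
numbers (assuming a 1024-sector bucket, consistent with the
bucket_size / fragment arithmetic used further below):

    1,000,000 buckets * 1024 sectors          = 1,024,000,000 sectors of cache
      700,000 dirty buckets * 8 dirty sectors =     5,600,000 dirty sectors

    dirty data    = 5,600,000 / 1,024,000,000 ~= 0.5% (far below writeback_percent)
    dirty buckets =   700,000 / 1,000,000      =  70% (the CUTOFF_WRITEBACK_SYNC level)

With numbers like these the old calculation keeps the rate at the
minimum even though the cache is about to hit the cutoff.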

We accelerate the rate in 3 stages with different aggressiveness: the
first stage starts when the dirty bucket percentage goes above
BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW (50), the second when it goes
above BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID (57), and the third when it
goes above BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH (64). By default the
first stage tries to write back the amount of dirty data in one bucket
(on average) in (1 / (dirty_buckets_percent - 50)) second, the second
stage tries to write back the amount of dirty data in one bucket in
(1 / (dirty_buckets_percent - 57)) * 100 millisecond, and the third
stage tries to write back the amount of dirty data in one bucket in
(1 / (dirty_buckets_percent - 64)) millisecond.
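
Below is a rough sketch of the staged boost described above. It is not
the literal patch: the thresholds, writeback_consider_fragment and the
writeback_rate_fp_term_* knobs are what this change introduces, while
dirty_sectors, dirty_buckets and rate are placeholders for values the
rate update path in writeback.c computes or can derive.

    int64_t fp_term = 0, fps;
    unsigned int in_use = c->gc_stats.in_use;  /* percent of buckets holding dirty data */

    if (dc->writeback_consider_fragment &&
        in_use > BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW) {
            if (in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID)
                    fp_term = (int64_t)dc->writeback_rate_fp_term_low *
                              (in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW);
            else if (in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH)
                    fp_term = (int64_t)dc->writeback_rate_fp_term_mid *
                              (in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID);
            else
                    fp_term = (int64_t)dc->writeback_rate_fp_term_high *
                              (in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH);

            /*
             * Average dirty sectors per dirty bucket, written back
             * fp_term times per second -> sectors per second.
             */
            fps = div_s64(dirty_sectors, dirty_buckets) * fp_term;
            if (fps > rate)
                    rate = fps;
    }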

The initial rate at each stage can be controlled by 3 configurable
parameters, writeback_rate_fp_term_{low|mid|high}. They default to 1,
10 and 1000 respectively; the IO throughput these values try to
achieve is the one described in the paragraph above. The reason I
chose those defaults is based on testing and on production data; the
details are below (a worked example follows the list):

A. When we enter the low stage, we are still a fair way from the 70
   threshold, so we only want to give it a small push by setting the
   term to 1. That means the initial rate will be 170 if the
   fragmentation ratio is 6; it is calculated as bucket_size / fragment.
   This rate is very small, but still much more reasonable than the
   minimum of 8.
   For a production bcache with a light workload, if the cache device
   is bigger than 1 TB, it may take hours to consume 1% of the buckets,
   so it is quite possible to reclaim enough dirty buckets in this
   stage and thereby avoid entering the next stage.

B. If the dirty bucket ratio didn't turn around during the first
   stage, we enter the mid stage, which needs to be more aggressive
   than the low stage, so I chose an initial rate 10 times higher than
   the low stage, i.e. 1700 if the fragmentation ratio is 6. This is a
   fairly normal rate we usually see for a normal workload when
   writeback is triggered by writeback_percent.

C. If the dirty bucket ratio didn't turn around during the low and
   mid stages, we enter the third stage, the last chance to turn
   around and avoid the horrible cutoff writeback sync issue. Here we
   go 100 times more aggressive than the mid stage, i.e. an initial
   rate of 170000 if the fragmentation ratio is 6. This is also
   inferred from a production bcache: I collected one week of
   writeback rate data from a production bcache with quite heavy
   workloads where, again, writeback was triggered by
   writeback_percent, and the highest rates were around 100000 to
   240000, so I believe this kind of aggressiveness is reasonable for
   production at this stage. It should also be more than enough,
   because the hint tries to reclaim 1000 buckets per second, while
   that heavy production environment consumed 50 buckets per second
   on average over the week's data.
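
Putting the three defaults together with the numbers used above
(a 1024-sector bucket and a fragmentation ratio of 6, i.e. roughly 170
dirty sectors per dirty bucket on average), the initial rates just
after entering each stage work out to:

    low  (51% in use):    1 * (51 - 50) * 1024 / 6 ~=    170 sectors/s (~85 KiB/s)
    mid  (58% in use):   10 * (58 - 57) * 1024 / 6 ~=   1700 sectors/s (~850 KiB/s)
    high (65% in use): 1000 * (65 - 64) * 1024 / 6 ~= 170000 sectors/s (~83 MiB/s)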

The option writeback_consider_fragment controls whether this feature
is on or off; it is on by default.

Lastly, below is the performance data for all the testing results,
including the data from the production environment:
https://docs.google.com/document/d/1AmbIEa_2MhB9bqhC3rfga9tp7n9YX9PLn0jSUxscVW0/edit?usp=sharing

Signed-off-by: dongdong tao <dongdong.tao@canonical.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-10 08:05:59 -07:00
alloc.c bcache: remove embedded struct cache_sb from struct cache_set 2020-10-02 14:25:30 -06:00
bcache.h bcache: consider the fragmentation when update the writeback rate 2021-02-10 08:05:59 -07:00
bset.c bcache: allocate meta data pages as compound pages 2020-07-25 07:38:19 -06:00
bset.h bcache: explicity type cast in bset_bkey_last() 2020-02-01 07:55:39 -07:00
btree.c bcache: remove embedded struct cache_sb from struct cache_set 2020-10-02 14:25:30 -06:00
btree.h bcache: remove embedded struct cache_sb from struct cache_set 2020-10-02 14:25:30 -06:00
closure.c bcache: Convert to DEFINE_SHOW_ATTRIBUTE 2020-10-02 14:25:29 -06:00
closure.h bcache: fix typo in code comments of closure_return_with_destructor() 2018-10-08 08:19:43 -06:00
debug.c bcache: use bio_set_dev to assign ->bi_bdev 2021-01-26 08:50:01 -07:00
debug.h bcache: add identifier names to arguments of function definitions 2018-08-11 15:46:41 -06:00
extents.c bcache: remove embedded struct cache_sb from struct cache_set 2020-10-02 14:25:30 -06:00
extents.h bcache: add identifier names to arguments of function definitions 2018-08-11 15:46:41 -06:00
features.c bcache: introduce BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE for large bucket 2021-01-09 09:21:03 -07:00
features.h bcache: introduce BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE for large bucket 2021-01-09 09:21:03 -07:00
io.c bcache: remove embedded struct cache_sb from struct cache_set 2020-10-02 14:25:30 -06:00
journal.c bcache: remove embedded struct cache_sb from struct cache_set 2020-10-02 14:25:30 -06:00
journal.h Revert "bcache: fix fifo index swapping condition in journal_pin_cmp()" 2019-11-18 08:35:47 -07:00
Kconfig bcache: Fix typo in Kconfig name 2020-07-25 07:38:19 -06:00
Makefile bcache: add sysfs file to display feature sets information of cache set 2020-07-25 07:38:21 -06:00
movinggc.c bcache: remove for_each_cache() 2020-10-02 14:25:29 -06:00
request.c block: use ->bi_bdev for bio based I/O accounting 2021-01-24 18:17:20 -07:00
request.h block: move ->make_request_fn to struct block_device_operations 2020-07-01 07:27:24 -06:00
stats.c bcache: fix memory corruption in bch_cache_accounting_clear() 2020-02-01 07:55:39 -07:00
stats.h bcache: add identifier names to arguments of function definitions 2018-08-11 15:46:41 -06:00
super.c bcache: don't pass BIOSET_NEED_BVECS for the 'bio_set' embedded in 'cache_set' 2021-01-24 21:24:09 -07:00
sysfs.c bcache: consider the fragmentation when update the writeback rate 2021-02-10 08:05:59 -07:00
sysfs.h bcache: add sysfs_strtoul_bool() for setting bit-field variables 2019-02-09 07:18:32 -07:00
trace.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
util.c treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
util.h bcache: Revert "bcache: fix high CPU occupancy during journal" 2019-06-28 07:39:17 -06:00
writeback.c bcache: consider the fragmentation when update the writeback rate 2021-02-10 08:05:59 -07:00
writeback.h bcache: consider the fragmentation when update the writeback rate 2021-02-10 08:05:59 -07:00