Message ID: 20211214031553.16435-1-yajun.deng@linux.dev
State: New
Series: lib/raid6: fix abnormally high latency
On Mon, Dec 13, 2021 at 7:17 PM Yajun Deng <yajun.deng@linux.dev> wrote: > > We found an abnormally high latency when executing modprobe raid6_pq, the > latency is greater than 1.2s when CONFIG_PREEMPT_VOLUNTARY=y, greater than > 67ms when CONFIG_PREEMPT=y, and greater than 16ms when CONFIG_PREEMPT_RT=y. > This is caused by disable the preemption, this time is too long and > unreasonable. We just need to disable migration. so used migrate_disable()/ > migrate_enable() instead of preempt_disable()/preempt_enable(). This is > beneficial for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y, but no effect for > CONFIG_PREEMPT_VOLUNTARY=y. > > Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions") > Signed-off-by: Yajun Deng <yajun.deng@linux.dev> We measure the speed of different RAID algorithms.If we don't disable preempt, the result may be inaccurate, right? IIUC, we only disable preempt for 16 jiffies. Why do we see 1.2 second delay? Thanks, Song > --- > lib/raid6/algos.c | 8 ++++---- > 1 file changed, 4 insertions(+), 4 deletions(-) > > diff --git a/lib/raid6/algos.c b/lib/raid6/algos.c > index 6d5e5000fdd7..21611d05c34c 100644 > --- a/lib/raid6/algos.c > +++ b/lib/raid6/algos.c > @@ -162,7 +162,7 @@ static inline const struct raid6_calls *raid6_choose_gen( > > perf = 0; > > - preempt_disable(); > + migrate_disable(); > j0 = jiffies; > while ((j1 = jiffies) == j0) > cpu_relax(); > @@ -171,7 +171,7 @@ static inline const struct raid6_calls *raid6_choose_gen( > (*algo)->gen_syndrome(disks, PAGE_SIZE, *dptrs); > perf++; > } > - preempt_enable(); > + migrate_enable(); > > if (perf > bestgenperf) { > bestgenperf = perf; > @@ -186,7 +186,7 @@ static inline const struct raid6_calls *raid6_choose_gen( > > perf = 0; > > - preempt_disable(); > + migrate_disable(); > j0 = jiffies; > while ((j1 = jiffies) == j0) > cpu_relax(); > @@ -196,7 +196,7 @@ static inline const struct raid6_calls *raid6_choose_gen( > PAGE_SIZE, *dptrs); > perf++; > } > - preempt_enable(); > + migrate_enable(); > > if (best == *algo) > bestxorperf = perf; > -- > 2.32.0 >
On Tue, Dec 14, 2021 at 6:14 PM <yajun.deng@linux.dev> wrote: > > December 15, 2021 1:27 AM, "Song Liu" <song@kernel.org> wrote: > > > On Mon, Dec 13, 2021 at 7:17 PM Yajun Deng <yajun.deng@linux.dev> wrote: > > > >> We found an abnormally high latency when executing modprobe raid6_pq, the > >> latency is greater than 1.2s when CONFIG_PREEMPT_VOLUNTARY=y, greater than > >> 67ms when CONFIG_PREEMPT=y, and greater than 16ms when CONFIG_PREEMPT_RT=y. > >> This is caused by disable the preemption, this time is too long and > >> unreasonable. We just need to disable migration. so used migrate_disable()/ > >> migrate_enable() instead of preempt_disable()/preempt_enable(). This is > >> beneficial for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y, but no effect for > >> CONFIG_PREEMPT_VOLUNTARY=y. > >> > >> Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions") > >> Signed-off-by: Yajun Deng <yajun.deng@linux.dev> > > > > We measure the speed of different RAID algorithms.If we don't disable > > preempt, the result may be inaccurate, right? IIUC, we only disable preempt > > for 16 jiffies. Why do we see 1.2 second delay? > > Here are the command of my test: > Execute "sudo cyclictest -S -p 95 -d 0 -i 1000 -D 24h -m" in one terminal and "sudo modprobe raid6_pq" in the other terminal. > > Here are the results of my test: > CONFIG_PREEMPT_VOLUNTARY=y,CONFIG_HZ_250=y > T: 0 ( 3092) P:95 I:1000 C: 8514 Min: 1 Act: 2 Avg: 1 Max: 6 > T: 1 ( 3093) P:95 I:1000 C: 8511 Min: 1 Act: 2 Avg: 1 Max: 14 I am not very familiar with the RT work, so please forgive me for some rookie questions.
December 16, 2021 12:52 AM, "Song Liu" <song@kernel.org> wrote: > On Tue, Dec 14, 2021 at 6:14 PM <yajun.deng@linux.dev> wrote: > >> December 15, 2021 1:27 AM, "Song Liu" <song@kernel.org> wrote: >> >> On Mon, Dec 13, 2021 at 7:17 PM Yajun Deng <yajun.deng@linux.dev> wrote: >> >> We found an abnormally high latency when executing modprobe raid6_pq, the >> latency is greater than 1.2s when CONFIG_PREEMPT_VOLUNTARY=y, greater than >> 67ms when CONFIG_PREEMPT=y, and greater than 16ms when CONFIG_PREEMPT_RT=y. >> This is caused by disable the preemption, this time is too long and >> unreasonable. We just need to disable migration. so used migrate_disable()/ >> migrate_enable() instead of preempt_disable()/preempt_enable(). This is >> beneficial for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y, but no effect for >> CONFIG_PREEMPT_VOLUNTARY=y. >> >> Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions") >> Signed-off-by: Yajun Deng <yajun.deng@linux.dev> >> >> We measure the speed of different RAID algorithms.If we don't disable >> preempt, the result may be inaccurate, right? IIUC, we only disable preempt >> for 16 jiffies. Why do we see 1.2 second delay? >> >> Here are the command of my test: >> Execute "sudo cyclictest -S -p 95 -d 0 -i 1000 -D 24h -m" in one terminal and "sudo modprobe >> raid6_pq" in the other terminal. >> >> Here are the results of my test: >> CONFIG_PREEMPT_VOLUNTARY=y,CONFIG_HZ_250=y >> T: 0 ( 3092) P:95 I:1000 C: 8514 Min: 1 Act: 2 Avg: 1 Max: 6 >> T: 1 ( 3093) P:95 I:1000 C: 8511 Min: 1 Act: 2 Avg: 1 Max: 14 > > I am not very familiar with the RT work, so please forgive me for some > rookie questions. > > From the result, I think the CONFIG_PREEMPT_VOLUNTARY=y and the > CONFIG_PREEMPT=y cases failed to preempt during the preempt enabled period in > raid6_choose_gen(). Is this expected? > No, This is due to disable preemption causing ksoftirqd fail to schedule, we can use bcc tools see that. > OTOH, the 16ms latency with CONFIG_PREEMPT_RT=y is more or less expected. > Is this acceptable? If not, is 1ms latency acceptable? 
>

Here are the test results after adding the patch:

CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y
T: 0 ( 3167) P:95 I:1000 C: 13958 Min: 1 Act: 2 Avg: 1 Max: 5
T: 1 ( 3168) P:95 I:1000 C: 13956 Min: 1 Act: 2 Avg: 1 Max: 7
T: 2 ( 3169) P:95 I:1000 C: 13946 Min: 1 Act: 2 Avg: 1 Max: 12
T: 3 ( 3170) P:95 I:1000 C: 13951 Min: 1 Act: 2 Avg: 1 Max: 5
T: 4 ( 3171) P:95 I:1000 C: 13949 Min: 1 Act: 2 Avg: 1 Max: 3
T: 5 ( 3172) P:95 I:1000 C: 13947 Min: 1 Act: 2 Avg: 1 Max: 16
T: 6 ( 3173) P:95 I:1000 C: 13945 Min: 1 Act: 2 Avg: 2 Max: 7
T: 7 ( 3174) P:95 I:1000 C: 13942 Min: 1 Act: 2 Avg: 1 Max: 3
T: 8 ( 3175) P:95 I:1000 C: 13940 Min: 1 Act: 2 Avg: 1 Max: 3
T: 9 ( 3176) P:95 I:1000 C: 13938 Min: 1 Act: 1 Avg: 1 Max: 3
T:10 ( 3177) P:95 I:1000 C: 13936 Min: 1 Act: 2 Avg: 1 Max: 6
T:11 ( 3178) P:95 I:1000 C: 13933 Min: 1 Act: 2 Avg: 1 Max: 3
T:12 ( 3179) P:95 I:1000 C: 13931 Min: 1 Act: 2 Avg: 1 Max: 4
T:13 ( 3180) P:95 I:1000 C: 13929 Min: 1 Act: 2 Avg: 1 Max: 7
T:14 ( 3181) P:95 I:1000 C: 13927 Min: 1 Act: 2 Avg: 1 Max: 6
T:15 ( 3182) P:95 I:1000 C: 13925 Min: 1 Act: 2 Avg: 1 Max: 4
T:16 ( 3183) P:95 I:1000 C: 13923 Min: 1 Act: 2 Avg: 1 Max: 5
T:17 ( 3184) P:95 I:1000 C: 13921 Min: 1 Act: 2 Avg: 1 Max: 5
T:18 ( 3185) P:95 I:1000 C: 13919 Min: 1 Act: 2 Avg: 1 Max: 4
T:19 ( 3186) P:95 I:1000 C: 13916 Min: 1 Act: 2 Avg: 1 Max: 4
T:20 ( 3187) P:95 I:1000 C: 13914 Min: 1 Act: 2 Avg: 1 Max: 4
T:21 ( 3188) P:95 I:1000 C: 13912 Min: 1 Act: 2 Avg: 1 Max: 10
T:22 ( 3189) P:95 I:1000 C: 13910 Min: 1 Act: 2 Avg: 1 Max: 5
T:23 ( 3190) P:95 I:1000 C: 13908 Min: 1 Act: 2 Avg: 1 Max: 5
T:24 ( 3191) P:95 I:1000 C: 13906 Min: 1 Act: 2 Avg: 1 Max: 18
T:25 ( 3192) P:95 I:1000 C: 13904 Min: 1 Act: 2 Avg: 1 Max: 7
T:26 ( 3193) P:95 I:1000 C: 13902 Min: 1 Act: 2 Avg: 1 Max: 5
T:27 ( 3194) P:95 I:1000 C: 13900 Min: 1 Act: 1 Avg: 1 Max: 11
T:28 ( 3195) P:95 I:1000 C: 13898 Min: 1 Act: 2 Avg: 1 Max: 3
T:29 ( 3196) P:95 I:1000 C: 13896 Min: 1 Act: 2 Avg: 1 Max: 5
T:30 ( 3197) P:95 I:1000 C: 13894 Min: 1 Act: 2 Avg: 1 Max: 4
T:31 ( 3198) P:95 I:1000 C: 13892 Min: 1 Act: 2 Avg: 1 Max: 3

We can see the latency is never greater than 100 us, so even 1 ms latency is too long for
CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y.
Using migrate_disable()/migrate_enable() instead of preempt_disable()/preempt_enable() will
not affect the speed of the different RAID algorithms, and the latency can be reduced to a
reasonable range.

> Thanks,
> Song
>
> [...]
On Wed, Dec 15, 2021 at 6:15 PM <yajun.deng@linux.dev> wrote: > > December 16, 2021 12:52 AM, "Song Liu" <song@kernel.org> wrote: > > > On Tue, Dec 14, 2021 at 6:14 PM <yajun.deng@linux.dev> wrote: > > > >> December 15, 2021 1:27 AM, "Song Liu" <song@kernel.org> wrote: > >> > >> On Mon, Dec 13, 2021 at 7:17 PM Yajun Deng <yajun.deng@linux.dev> wrote: > >> > >> We found an abnormally high latency when executing modprobe raid6_pq, the > >> latency is greater than 1.2s when CONFIG_PREEMPT_VOLUNTARY=y, greater than > >> 67ms when CONFIG_PREEMPT=y, and greater than 16ms when CONFIG_PREEMPT_RT=y. > >> This is caused by disable the preemption, this time is too long and > >> unreasonable. We just need to disable migration. so used migrate_disable()/ > >> migrate_enable() instead of preempt_disable()/preempt_enable(). This is > >> beneficial for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y, but no effect for > >> CONFIG_PREEMPT_VOLUNTARY=y. > >> > >> Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions") > >> Signed-off-by: Yajun Deng <yajun.deng@linux.dev> > >> > >> We measure the speed of different RAID algorithms.If we don't disable > >> preempt, the result may be inaccurate, right? IIUC, we only disable preempt > >> for 16 jiffies. Why do we see 1.2 second delay? > >> > >> Here are the command of my test: > >> Execute "sudo cyclictest -S -p 95 -d 0 -i 1000 -D 24h -m" in one terminal and "sudo modprobe > >> raid6_pq" in the other terminal. > >> > >> Here are the results of my test: > >> CONFIG_PREEMPT_VOLUNTARY=y,CONFIG_HZ_250=y > >> T: 0 ( 3092) P:95 I:1000 C: 8514 Min: 1 Act: 2 Avg: 1 Max: 6 > >> T: 1 ( 3093) P:95 I:1000 C: 8511 Min: 1 Act: 2 Avg: 1 Max: 14 > > > > I am not very familiar with the RT work, so please forgive me for some > > rookie questions. > > > > From the result, I think the CONFIG_PREEMPT_VOLUNTARY=y and the > > CONFIG_PREEMPT=y cases failed to preempt during the preempt enabled period in > > raid6_choose_gen(). Is this expected? > > > > No, This is due to disable preemption causing ksoftirqd fail to schedule, we can use bcc tools see that. > > > OTOH, the 16ms latency with CONFIG_PREEMPT_RT=y is more or less expected. > > Is this acceptable? If not, is 1ms latency acceptable? 
> > > > Here are the test results after adding patch: > CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y > T: 0 ( 3167) P:95 I:1000 C: 13958 Min: 1 Act: 2 Avg: 1 Max: 5 > T: 1 ( 3168) P:95 I:1000 C: 13956 Min: 1 Act: 2 Avg: 1 Max: 7 > T: 2 ( 3169) P:95 I:1000 C: 13946 Min: 1 Act: 2 Avg: 1 Max: 12 > T: 3 ( 3170) P:95 I:1000 C: 13951 Min: 1 Act: 2 Avg: 1 Max: 5 > T: 4 ( 3171) P:95 I:1000 C: 13949 Min: 1 Act: 2 Avg: 1 Max: 3 > T: 5 ( 3172) P:95 I:1000 C: 13947 Min: 1 Act: 2 Avg: 1 Max: 16 > T: 6 ( 3173) P:95 I:1000 C: 13945 Min: 1 Act: 2 Avg: 2 Max: 7 > T: 7 ( 3174) P:95 I:1000 C: 13942 Min: 1 Act: 2 Avg: 1 Max: 3 > T: 8 ( 3175) P:95 I:1000 C: 13940 Min: 1 Act: 2 Avg: 1 Max: 3 > T: 9 ( 3176) P:95 I:1000 C: 13938 Min: 1 Act: 1 Avg: 1 Max: 3 > T:10 ( 3177) P:95 I:1000 C: 13936 Min: 1 Act: 2 Avg: 1 Max: 6 > T:11 ( 3178) P:95 I:1000 C: 13933 Min: 1 Act: 2 Avg: 1 Max: 3 > T:12 ( 3179) P:95 I:1000 C: 13931 Min: 1 Act: 2 Avg: 1 Max: 4 > T:13 ( 3180) P:95 I:1000 C: 13929 Min: 1 Act: 2 Avg: 1 Max: 7 > T:14 ( 3181) P:95 I:1000 C: 13927 Min: 1 Act: 2 Avg: 1 Max: 6 > T:15 ( 3182) P:95 I:1000 C: 13925 Min: 1 Act: 2 Avg: 1 Max: 4 > T:16 ( 3183) P:95 I:1000 C: 13923 Min: 1 Act: 2 Avg: 1 Max: 5 > T:17 ( 3184) P:95 I:1000 C: 13921 Min: 1 Act: 2 Avg: 1 Max: 5 > T:18 ( 3185) P:95 I:1000 C: 13919 Min: 1 Act: 2 Avg: 1 Max: 4 > T:19 ( 3186) P:95 I:1000 C: 13916 Min: 1 Act: 2 Avg: 1 Max: 4 > T:20 ( 3187) P:95 I:1000 C: 13914 Min: 1 Act: 2 Avg: 1 Max: 4 > T:21 ( 3188) P:95 I:1000 C: 13912 Min: 1 Act: 2 Avg: 1 Max: 10 > T:22 ( 3189) P:95 I:1000 C: 13910 Min: 1 Act: 2 Avg: 1 Max: 5 > T:23 ( 3190) P:95 I:1000 C: 13908 Min: 1 Act: 2 Avg: 1 Max: 5 > T:24 ( 3191) P:95 I:1000 C: 13906 Min: 1 Act: 2 Avg: 1 Max: 18 > T:25 ( 3192) P:95 I:1000 C: 13904 Min: 1 Act: 2 Avg: 1 Max: 7 > T:26 ( 3193) P:95 I:1000 C: 13902 Min: 1 Act: 2 Avg: 1 Max: 5 > T:27 ( 3194) P:95 I:1000 C: 13900 Min: 1 Act: 1 Avg: 1 Max: 11 > T:28 ( 3195) P:95 I:1000 C: 13898 Min: 1 Act: 2 Avg: 1 Max: 3 > T:29 ( 3196) P:95 I:1000 C: 13896 Min: 1 Act: 2 Avg: 1 Max: 5 > T:30 ( 3197) P:95 I:1000 C: 13894 Min: 1 Act: 2 Avg: 1 Max: 4 > T:31 ( 3198) P:95 I:1000 C: 13892 Min: 1 Act: 2 Avg: 1 Max: 3 > > we can see the latency will not greater than 100us,so 1ms latency is also too long for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y. > use migrate_disable()/migrate_enable() instead of preempt_disable()/preempt_enable() will not affect the speed of different RAID algorithms and the latency can be reduced to a reasonable range. I think allowing preempt may still affect the speed comparison. But such discrepancy should be acceptable. I will apply this to md-next. Thanks, Song
December 16, 2021 2:39 PM, "Song Liu" <song@kernel.org> wrote: > On Wed, Dec 15, 2021 at 6:15 PM <yajun.deng@linux.dev> wrote: > >> December 16, 2021 12:52 AM, "Song Liu" <song@kernel.org> wrote: >> >> On Tue, Dec 14, 2021 at 6:14 PM <yajun.deng@linux.dev> wrote: >> >> December 15, 2021 1:27 AM, "Song Liu" <song@kernel.org> wrote: >> >> On Mon, Dec 13, 2021 at 7:17 PM Yajun Deng <yajun.deng@linux.dev> wrote: >> >> We found an abnormally high latency when executing modprobe raid6_pq, the >> latency is greater than 1.2s when CONFIG_PREEMPT_VOLUNTARY=y, greater than >> 67ms when CONFIG_PREEMPT=y, and greater than 16ms when CONFIG_PREEMPT_RT=y. >> This is caused by disable the preemption, this time is too long and >> unreasonable. We just need to disable migration. so used migrate_disable()/ >> migrate_enable() instead of preempt_disable()/preempt_enable(). This is >> beneficial for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y, but no effect for >> CONFIG_PREEMPT_VOLUNTARY=y. >> >> Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions") >> Signed-off-by: Yajun Deng <yajun.deng@linux.dev> >> >> We measure the speed of different RAID algorithms.If we don't disable >> preempt, the result may be inaccurate, right? IIUC, we only disable preempt >> for 16 jiffies. Why do we see 1.2 second delay? >> >> Here are the command of my test: >> Execute "sudo cyclictest -S -p 95 -d 0 -i 1000 -D 24h -m" in one terminal and "sudo modprobe >> raid6_pq" in the other terminal. >> >> Here are the results of my test: >> CONFIG_PREEMPT_VOLUNTARY=y,CONFIG_HZ_250=y >> T: 0 ( 3092) P:95 I:1000 C: 8514 Min: 1 Act: 2 Avg: 1 Max: 6 >> T: 1 ( 3093) P:95 I:1000 C: 8511 Min: 1 Act: 2 Avg: 1 Max: 14 >> >> I am not very familiar with the RT work, so please forgive me for some >> rookie questions. >> >> From the result, I think the CONFIG_PREEMPT_VOLUNTARY=y and the >> CONFIG_PREEMPT=y cases failed to preempt during the preempt enabled period in >> raid6_choose_gen(). Is this expected? >> >> No, This is due to disable preemption causing ksoftirqd fail to schedule, we can use bcc tools see >> that. >> >> OTOH, the 16ms latency with CONFIG_PREEMPT_RT=y is more or less expected. >> Is this acceptable? If not, is 1ms latency acceptable? 
>> >> Here are the test results after adding patch: >> CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y >> T: 0 ( 3167) P:95 I:1000 C: 13958 Min: 1 Act: 2 Avg: 1 Max: 5 >> T: 1 ( 3168) P:95 I:1000 C: 13956 Min: 1 Act: 2 Avg: 1 Max: 7 >> T: 2 ( 3169) P:95 I:1000 C: 13946 Min: 1 Act: 2 Avg: 1 Max: 12 >> T: 3 ( 3170) P:95 I:1000 C: 13951 Min: 1 Act: 2 Avg: 1 Max: 5 >> T: 4 ( 3171) P:95 I:1000 C: 13949 Min: 1 Act: 2 Avg: 1 Max: 3 >> T: 5 ( 3172) P:95 I:1000 C: 13947 Min: 1 Act: 2 Avg: 1 Max: 16 >> T: 6 ( 3173) P:95 I:1000 C: 13945 Min: 1 Act: 2 Avg: 2 Max: 7 >> T: 7 ( 3174) P:95 I:1000 C: 13942 Min: 1 Act: 2 Avg: 1 Max: 3 >> T: 8 ( 3175) P:95 I:1000 C: 13940 Min: 1 Act: 2 Avg: 1 Max: 3 >> T: 9 ( 3176) P:95 I:1000 C: 13938 Min: 1 Act: 1 Avg: 1 Max: 3 >> T:10 ( 3177) P:95 I:1000 C: 13936 Min: 1 Act: 2 Avg: 1 Max: 6 >> T:11 ( 3178) P:95 I:1000 C: 13933 Min: 1 Act: 2 Avg: 1 Max: 3 >> T:12 ( 3179) P:95 I:1000 C: 13931 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:13 ( 3180) P:95 I:1000 C: 13929 Min: 1 Act: 2 Avg: 1 Max: 7 >> T:14 ( 3181) P:95 I:1000 C: 13927 Min: 1 Act: 2 Avg: 1 Max: 6 >> T:15 ( 3182) P:95 I:1000 C: 13925 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:16 ( 3183) P:95 I:1000 C: 13923 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:17 ( 3184) P:95 I:1000 C: 13921 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:18 ( 3185) P:95 I:1000 C: 13919 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:19 ( 3186) P:95 I:1000 C: 13916 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:20 ( 3187) P:95 I:1000 C: 13914 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:21 ( 3188) P:95 I:1000 C: 13912 Min: 1 Act: 2 Avg: 1 Max: 10 >> T:22 ( 3189) P:95 I:1000 C: 13910 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:23 ( 3190) P:95 I:1000 C: 13908 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:24 ( 3191) P:95 I:1000 C: 13906 Min: 1 Act: 2 Avg: 1 Max: 18 >> T:25 ( 3192) P:95 I:1000 C: 13904 Min: 1 Act: 2 Avg: 1 Max: 7 >> T:26 ( 3193) P:95 I:1000 C: 13902 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:27 ( 3194) P:95 I:1000 C: 13900 Min: 1 Act: 1 Avg: 1 Max: 11 >> T:28 ( 3195) P:95 I:1000 C: 13898 Min: 1 Act: 2 Avg: 1 Max: 3 >> T:29 ( 3196) P:95 I:1000 C: 13896 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:30 ( 3197) P:95 I:1000 C: 13894 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:31 ( 3198) P:95 I:1000 C: 13892 Min: 1 Act: 2 Avg: 1 Max: 3 >> >> we can see the latency will not greater than 100us,so 1ms latency is also too long for >> CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y. >> use migrate_disable()/migrate_enable() instead of preempt_disable()/preempt_enable() will not >> affect the speed of different RAID algorithms and the latency can be reduced to a reasonable range. > > I think allowing preempt may still affect the speed comparison. But > such discrepancy > should be acceptable. I will apply this to md-next. Thank you, would you mind backport this patch to all LTS kernel? As you can see, the latency is not a little higher, but too much higher, it seems unreasonable, I think this may be a bug. We are using the 5.10 kernel and it would be very convenient for us if that was done. > Thanks, > Song
Dear Song, dear Yajun, Am 16.12.21 um 07:39 schrieb Song Liu: > On Wed, Dec 15, 2021 at 6:15 PM <yajun.deng@linux.dev> wrote: >> >> December 16, 2021 12:52 AM, "Song Liu" <song@kernel.org> wrote: >> >>> On Tue, Dec 14, 2021 at 6:14 PM <yajun.deng@linux.dev> wrote: >>> >>>> December 15, 2021 1:27 AM, "Song Liu" <song@kernel.org> wrote: >>>> >>>> On Mon, Dec 13, 2021 at 7:17 PM Yajun Deng <yajun.deng@linux.dev> wrote: >>>> >>>> We found an abnormally high latency when executing modprobe raid6_pq, the >>>> latency is greater than 1.2s when CONFIG_PREEMPT_VOLUNTARY=y, greater than >>>> 67ms when CONFIG_PREEMPT=y, and greater than 16ms when CONFIG_PREEMPT_RT=y. >>>> This is caused by disable the preemption, this time is too long and >>>> unreasonable. We just need to disable migration. so used migrate_disable()/ >>>> migrate_enable() instead of preempt_disable()/preempt_enable(). This is >>>> beneficial for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y, but no effect for >>>> CONFIG_PREEMPT_VOLUNTARY=y. >>>> >>>> Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions") >>>> Signed-off-by: Yajun Deng <yajun.deng@linux.dev> >>>> >>>> We measure the speed of different RAID algorithms.If we don't disable >>>> preempt, the result may be inaccurate, right? IIUC, we only disable preempt >>>> for 16 jiffies. Why do we see 1.2 second delay? >>>> >>>> Here are the command of my test: >>>> Execute "sudo cyclictest -S -p 95 -d 0 -i 1000 -D 24h -m" in one terminal and "sudo modprobe >>>> raid6_pq" in the other terminal. >>>> >>>> Here are the results of my test: >>>> CONFIG_PREEMPT_VOLUNTARY=y,CONFIG_HZ_250=y >>>> T: 0 ( 3092) P:95 I:1000 C: 8514 Min: 1 Act: 2 Avg: 1 Max: 6 >>>> T: 1 ( 3093) P:95 I:1000 C: 8511 Min: 1 Act: 2 Avg: 1 Max: 14 >>> >>> I am not very familiar with the RT work, so please forgive me for some >>> rookie questions. >>> >>> From the result, I think the CONFIG_PREEMPT_VOLUNTARY=y and the >>> CONFIG_PREEMPT=y cases failed to preempt during the preempt enabled period in >>> raid6_choose_gen(). Is this expected? >>> >> >> No, This is due to disable preemption causing ksoftirqd fail to schedule, we can use bcc tools see that. >> >>> OTOH, the 16ms latency with CONFIG_PREEMPT_RT=y is more or less expected. >>> Is this acceptable? If not, is 1ms latency acceptable? 
>>> >> >> Here are the test results after adding patch: >> CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y >> T: 0 ( 3167) P:95 I:1000 C: 13958 Min: 1 Act: 2 Avg: 1 Max: 5 >> T: 1 ( 3168) P:95 I:1000 C: 13956 Min: 1 Act: 2 Avg: 1 Max: 7 >> T: 2 ( 3169) P:95 I:1000 C: 13946 Min: 1 Act: 2 Avg: 1 Max: 12 >> T: 3 ( 3170) P:95 I:1000 C: 13951 Min: 1 Act: 2 Avg: 1 Max: 5 >> T: 4 ( 3171) P:95 I:1000 C: 13949 Min: 1 Act: 2 Avg: 1 Max: 3 >> T: 5 ( 3172) P:95 I:1000 C: 13947 Min: 1 Act: 2 Avg: 1 Max: 16 >> T: 6 ( 3173) P:95 I:1000 C: 13945 Min: 1 Act: 2 Avg: 2 Max: 7 >> T: 7 ( 3174) P:95 I:1000 C: 13942 Min: 1 Act: 2 Avg: 1 Max: 3 >> T: 8 ( 3175) P:95 I:1000 C: 13940 Min: 1 Act: 2 Avg: 1 Max: 3 >> T: 9 ( 3176) P:95 I:1000 C: 13938 Min: 1 Act: 1 Avg: 1 Max: 3 >> T:10 ( 3177) P:95 I:1000 C: 13936 Min: 1 Act: 2 Avg: 1 Max: 6 >> T:11 ( 3178) P:95 I:1000 C: 13933 Min: 1 Act: 2 Avg: 1 Max: 3 >> T:12 ( 3179) P:95 I:1000 C: 13931 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:13 ( 3180) P:95 I:1000 C: 13929 Min: 1 Act: 2 Avg: 1 Max: 7 >> T:14 ( 3181) P:95 I:1000 C: 13927 Min: 1 Act: 2 Avg: 1 Max: 6 >> T:15 ( 3182) P:95 I:1000 C: 13925 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:16 ( 3183) P:95 I:1000 C: 13923 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:17 ( 3184) P:95 I:1000 C: 13921 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:18 ( 3185) P:95 I:1000 C: 13919 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:19 ( 3186) P:95 I:1000 C: 13916 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:20 ( 3187) P:95 I:1000 C: 13914 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:21 ( 3188) P:95 I:1000 C: 13912 Min: 1 Act: 2 Avg: 1 Max: 10 >> T:22 ( 3189) P:95 I:1000 C: 13910 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:23 ( 3190) P:95 I:1000 C: 13908 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:24 ( 3191) P:95 I:1000 C: 13906 Min: 1 Act: 2 Avg: 1 Max: 18 >> T:25 ( 3192) P:95 I:1000 C: 13904 Min: 1 Act: 2 Avg: 1 Max: 7 >> T:26 ( 3193) P:95 I:1000 C: 13902 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:27 ( 3194) P:95 I:1000 C: 13900 Min: 1 Act: 1 Avg: 1 Max: 11 >> T:28 ( 3195) P:95 I:1000 C: 13898 Min: 1 Act: 2 Avg: 1 Max: 3 >> T:29 ( 3196) P:95 I:1000 C: 13896 Min: 1 Act: 2 Avg: 1 Max: 5 >> T:30 ( 3197) P:95 I:1000 C: 13894 Min: 1 Act: 2 Avg: 1 Max: 4 >> T:31 ( 3198) P:95 I:1000 C: 13892 Min: 1 Act: 2 Avg: 1 Max: 3 >> >> we can see the latency will not greater than 100us,so 1ms latency is also too long for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y. >> use migrate_disable()/migrate_enable() instead of preempt_disable()/preempt_enable() will not affect the speed of different RAID algorithms and the latency can be reduced to a reasonable range. > > I think allowing preempt may still affect the speed comparison. But > such discrepancy should be acceptable. I will apply this to md-next. Could the commit message please be extended, how to reproduce this? No idea, where to find `cyclictest` for example. Was `initcall_debug` used to measure the execution time of the init method? Lastly, only one Fixes: tag is added, but the mentioned one only added one of the changed `preempt_enabled()`/`preempt_disable()`. Should all be listed? The commit message could also say something like: > Reduce high latency by using migrate instead of preempt Kind regards, Paul
December 16, 2021 4:09 PM, "Paul Menzel" <pmenzel@molgen.mpg.de> wrote: > Dear Song, dear Yajun, > > Am 16.12.21 um 07:39 schrieb Song Liu: > >> On Wed, Dec 15, 2021 at 6:15 PM <yajun.deng@linux.dev> wrote: >>> December 16, 2021 12:52 AM, "Song Liu" <song@kernel.org> wrote: >> >> On Tue, Dec 14, 2021 at 6:14 PM <yajun.deng@linux.dev> wrote: >> >> December 15, 2021 1:27 AM, "Song Liu" <song@kernel.org> wrote: >> >> On Mon, Dec 13, 2021 at 7:17 PM Yajun Deng <yajun.deng@linux.dev> wrote: >> >> We found an abnormally high latency when executing modprobe raid6_pq, the >> latency is greater than 1.2s when CONFIG_PREEMPT_VOLUNTARY=y, greater than >> 67ms when CONFIG_PREEMPT=y, and greater than 16ms when CONFIG_PREEMPT_RT=y. >> This is caused by disable the preemption, this time is too long and >> unreasonable. We just need to disable migration. so used migrate_disable()/ >> migrate_enable() instead of preempt_disable()/preempt_enable(). This is >> beneficial for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y, but no effect for >> CONFIG_PREEMPT_VOLUNTARY=y. >> >> Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions") >> Signed-off-by: Yajun Deng <yajun.deng@linux.dev> >> >> We measure the speed of different RAID algorithms.If we don't disable >> preempt, the result may be inaccurate, right? IIUC, we only disable preempt >> for 16 jiffies. Why do we see 1.2 second delay? >> >> Here are the command of my test: >> Execute "sudo cyclictest -S -p 95 -d 0 -i 1000 -D 24h -m" in one terminal and "sudo modprobe >> raid6_pq" in the other terminal. >> >> Here are the results of my test: >> CONFIG_PREEMPT_VOLUNTARY=y,CONFIG_HZ_250=y >> T: 0 ( 3092) P:95 I:1000 C: 8514 Min: 1 Act: 2 Avg: 1 Max: 6 >> T: 1 ( 3093) P:95 I:1000 C: 8511 Min: 1 Act: 2 Avg: 1 Max: 14 >> >> I am not very familiar with the RT work, so please forgive me for some >> rookie questions. >> >> From the result, I think the CONFIG_PREEMPT_VOLUNTARY=y and the >> CONFIG_PREEMPT=y cases failed to preempt during the preempt enabled period in >> raid6_choose_gen(). Is this expected? >> >>> No, This is due to disable preemption causing ksoftirqd fail to schedule, we can use bcc tools see >>> that. >> >> OTOH, the 16ms latency with CONFIG_PREEMPT_RT=y is more or less expected. >> Is this acceptable? If not, is 1ms latency acceptable? 
>> >>> Here are the test results after adding patch: >>> CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y >>> T: 0 ( 3167) P:95 I:1000 C: 13958 Min: 1 Act: 2 Avg: 1 Max: 5 >>> T: 1 ( 3168) P:95 I:1000 C: 13956 Min: 1 Act: 2 Avg: 1 Max: 7 >>> T: 2 ( 3169) P:95 I:1000 C: 13946 Min: 1 Act: 2 Avg: 1 Max: 12 >>> T: 3 ( 3170) P:95 I:1000 C: 13951 Min: 1 Act: 2 Avg: 1 Max: 5 >>> T: 4 ( 3171) P:95 I:1000 C: 13949 Min: 1 Act: 2 Avg: 1 Max: 3 >>> T: 5 ( 3172) P:95 I:1000 C: 13947 Min: 1 Act: 2 Avg: 1 Max: 16 >>> T: 6 ( 3173) P:95 I:1000 C: 13945 Min: 1 Act: 2 Avg: 2 Max: 7 >>> T: 7 ( 3174) P:95 I:1000 C: 13942 Min: 1 Act: 2 Avg: 1 Max: 3 >>> T: 8 ( 3175) P:95 I:1000 C: 13940 Min: 1 Act: 2 Avg: 1 Max: 3 >>> T: 9 ( 3176) P:95 I:1000 C: 13938 Min: 1 Act: 1 Avg: 1 Max: 3 >>> T:10 ( 3177) P:95 I:1000 C: 13936 Min: 1 Act: 2 Avg: 1 Max: 6 >>> T:11 ( 3178) P:95 I:1000 C: 13933 Min: 1 Act: 2 Avg: 1 Max: 3 >>> T:12 ( 3179) P:95 I:1000 C: 13931 Min: 1 Act: 2 Avg: 1 Max: 4 >>> T:13 ( 3180) P:95 I:1000 C: 13929 Min: 1 Act: 2 Avg: 1 Max: 7 >>> T:14 ( 3181) P:95 I:1000 C: 13927 Min: 1 Act: 2 Avg: 1 Max: 6 >>> T:15 ( 3182) P:95 I:1000 C: 13925 Min: 1 Act: 2 Avg: 1 Max: 4 >>> T:16 ( 3183) P:95 I:1000 C: 13923 Min: 1 Act: 2 Avg: 1 Max: 5 >>> T:17 ( 3184) P:95 I:1000 C: 13921 Min: 1 Act: 2 Avg: 1 Max: 5 >>> T:18 ( 3185) P:95 I:1000 C: 13919 Min: 1 Act: 2 Avg: 1 Max: 4 >>> T:19 ( 3186) P:95 I:1000 C: 13916 Min: 1 Act: 2 Avg: 1 Max: 4 >>> T:20 ( 3187) P:95 I:1000 C: 13914 Min: 1 Act: 2 Avg: 1 Max: 4 >>> T:21 ( 3188) P:95 I:1000 C: 13912 Min: 1 Act: 2 Avg: 1 Max: 10 >>> T:22 ( 3189) P:95 I:1000 C: 13910 Min: 1 Act: 2 Avg: 1 Max: 5 >>> T:23 ( 3190) P:95 I:1000 C: 13908 Min: 1 Act: 2 Avg: 1 Max: 5 >>> T:24 ( 3191) P:95 I:1000 C: 13906 Min: 1 Act: 2 Avg: 1 Max: 18 >>> T:25 ( 3192) P:95 I:1000 C: 13904 Min: 1 Act: 2 Avg: 1 Max: 7 >>> T:26 ( 3193) P:95 I:1000 C: 13902 Min: 1 Act: 2 Avg: 1 Max: 5 >>> T:27 ( 3194) P:95 I:1000 C: 13900 Min: 1 Act: 1 Avg: 1 Max: 11 >>> T:28 ( 3195) P:95 I:1000 C: 13898 Min: 1 Act: 2 Avg: 1 Max: 3 >>> T:29 ( 3196) P:95 I:1000 C: 13896 Min: 1 Act: 2 Avg: 1 Max: 5 >>> T:30 ( 3197) P:95 I:1000 C: 13894 Min: 1 Act: 2 Avg: 1 Max: 4 >>> T:31 ( 3198) P:95 I:1000 C: 13892 Min: 1 Act: 2 Avg: 1 Max: 3 >>> >>> we can see the latency will not greater than 100us,so 1ms latency is also too long for >>> CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y. >>> use migrate_disable()/migrate_enable() instead of preempt_disable()/preempt_enable() will not >>> affect the speed of different RAID algorithms and the latency can be reduced to a reasonable range. >> >> I think allowing preempt may still affect the speed comparison. But >> such discrepancy should be acceptable. I will apply this to md-next. > > Could the commit message please be extended, how to reproduce this? No idea, where to find > `cyclictest` for example. Was `initcall_debug` used to measure the execution time of the init > method? > > Lastly, only one Fixes: tag is added, but the mentioned one only added one of the changed > `preempt_enabled()`/`preempt_disable()`. Should all be listed? > > The commit message could also say something like: > >> Reduce high latency by using migrate instead of preempt > > Kind regards, > > Paul Thank you for your suggestion, I will submit another patch, extending comment message.
Hello, This is perfectly correct and expected result. Further explanation follows down below. On Wed, Dec 15, 2021 at 8:41 PM <yajun.deng@linux.dev> wrote: > > December 15, 2021 1:27 AM, "Song Liu" <song@kernel.org> wrote: > > > On Mon, Dec 13, 2021 at 7:17 PM Yajun Deng <yajun.deng@linux.dev> wrote: > > > >> We found an abnormally high latency when executing modprobe raid6_pq, the > >> latency is greater than 1.2s when CONFIG_PREEMPT_VOLUNTARY=y, greater than > >> 67ms when CONFIG_PREEMPT=y, and greater than 16ms when CONFIG_PREEMPT_RT=y. > >> This is caused by disable the preemption, this time is too long and > >> unreasonable. We just need to disable migration. so used migrate_disable()/ > >> migrate_enable() instead of preempt_disable()/preempt_enable(). This is > >> beneficial for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y, but no effect for > >> CONFIG_PREEMPT_VOLUNTARY=y. > >> > >> Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions") > >> Signed-off-by: Yajun Deng <yajun.deng@linux.dev> > > > > We measure the speed of different RAID algorithms.If we don't disable > > preempt, the result may be inaccurate, right? IIUC, we only disable preempt > > for 16 jiffies. Why do we see 1.2 second delay? > > Here are the command of my test: > Execute "sudo cyclictest -S -p 95 -d 0 -i 1000 -D 24h -m" in one terminal and "sudo modprobe raid6_pq" in the other terminal. > > Here are the results of my test: > CONFIG_PREEMPT_VOLUNTARY=y,CONFIG_HZ_250=y > T:17 ( 3109) P:95 I:1000 C: 7255 Min: 1 Act: 2 Avg: 170 Max: 1220832 No preemption is expected with this config so the CPU is busy and occupied for full duration of all the benchmarks. > CONFIG_PREEMPT=y,CONFIG_HZ_250=y > T: 2 ( 2462) P:95 I:1000 C: 49024 Min: 1 Act: 2 Avg: 4 Max: 67888 16 jiffies with 250 Hz equals to 64 ms. So again, this result is perfectly expected. The benchmarks can be preempted in between the individual measurements with preemptible kernel configuration. > CONFIG_PREEMPT_RT=y,CONFIG_HZ_1000=y > T: 6 ( 2561) P:95 I:1000 C: 25284 Min: 3 Act: 3 Avg: 4 Max: 16593 16 jiffies by 1000 Hz is 16 ms. Ditto. So far so good. The thing is the preemption disabled region is introduced here precisely to guard the performance measurements and it cannot be removed. The purpose is to ensure the measurements are protected from external noise and hence the results are reliable and the best performing implementation can be selected. Changing it to migrate disable is incorrect. You can as well just remove it completely as with preemption enabled (migrate disable does not disable preemption) the benchmarks can be flawed with external noise and the results are not reliable anymore. In other words, the latency is desired here by design. Removing the preemption disabled region is a BUG. But really the main thing here is your test scenario is plain wrong. It's a common misunderstanding that you should never see high latency spikes. Sometimes you really should. There is an extended set of operations which should be strictly avoided (with regards to latency applications). Loading kernel modules is one example of them. It is not considered a latency safe operation and hence it is not allowed in production environment! The application (or operator/the system configuration) should ensure all the necessary modules are preloaded in setup phase before the application workload is executed. Ie. before you start the cyclictest. For example here, all the raid related modules *should* be loaded on system boot. 
So that the initialization does not interfere with the running production application. Makes sense? --nX > > > > Thanks, > > Song > > > >> --- > >> lib/raid6/algos.c | 8 ++++---- > >> 1 file changed, 4 insertions(+), 4 deletions(-) > >> > >> diff --git a/lib/raid6/algos.c b/lib/raid6/algos.c > >> index 6d5e5000fdd7..21611d05c34c 100644 > >> --- a/lib/raid6/algos.c > >> +++ b/lib/raid6/algos.c > >> @@ -162,7 +162,7 @@ static inline const struct raid6_calls *raid6_choose_gen( > >> > >> perf = 0; > >> > >> - preempt_disable(); > >> + migrate_disable(); > >> j0 = jiffies; > >> while ((j1 = jiffies) == j0) > >> cpu_relax(); > >> @@ -171,7 +171,7 @@ static inline const struct raid6_calls *raid6_choose_gen( > >> (*algo)->gen_syndrome(disks, PAGE_SIZE, *dptrs); > >> perf++; > >> } > >> - preempt_enable(); > >> + migrate_enable(); > >> > >> if (perf > bestgenperf) { > >> bestgenperf = perf; > >> @@ -186,7 +186,7 @@ static inline const struct raid6_calls *raid6_choose_gen( > >> > >> perf = 0; > >> > >> - preempt_disable(); > >> + migrate_disable(); > >> j0 = jiffies; > >> while ((j1 = jiffies) == j0) > >> cpu_relax(); > >> @@ -196,7 +196,7 @@ static inline const struct raid6_calls *raid6_choose_gen( > >> PAGE_SIZE, *dptrs); > >> perf++; > >> } > >> - preempt_enable(); > >> + migrate_enable(); > >> > >> if (best == *algo) > >> bestxorperf = perf; > >> -- > >> 2.32.0
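As a quick sanity check of the arithmetic in the explanation above, here is a small stand-alone C sketch (an illustration only; the 16-jiffy window and the HZ values are the figures quoted in this thread, nothing is taken from algos.c beyond that):

#include <stdio.h>

int main(void)
{
	const int window_jiffies = 16;    /* per-algorithm benchmark window, per the thread */
	const int hz[] = { 250, 1000 };   /* CONFIG_HZ_250 and CONFIG_HZ_1000 */

	for (unsigned int i = 0; i < sizeof(hz) / sizeof(hz[0]); i++)
		printf("HZ=%4d: one benchmark window = %2d ms\n",
		       hz[i], window_jiffies * 1000 / hz[i]);

	/*
	 * Prints 64 ms for HZ=250 and 16 ms for HZ=1000, matching the ~67 ms
	 * and ~16 ms cyclictest maxima reported for CONFIG_PREEMPT and
	 * CONFIG_PREEMPT_RT. With CONFIG_PREEMPT_VOLUNTARY nothing preempts
	 * between windows, so the windows of every probed implementation run
	 * back to back, which is where the ~1.2 s figure comes from.
	 */
	return 0;
}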
On Thu, Dec 16, 2021 at 1:19 PM Song Liu <song@kernel.org> wrote:
> I think allowing preempt may still affect the speed comparison. But
> such discrepancy
> should be acceptable. I will apply this to md-next.

No, please. This could eventually result in people starting to report
'sometimes the storage does not perform' or 'random high CPU usage on some
boots' kinds of issues. We should not touch this.

See my other email with the detailed explanation.

--nX

> Thanks,
> Song
On Fri, Dec 17, 2021 at 1:43 PM Daniel Vacek <neelx.g@gmail.com> wrote: > > Hello, > > This is perfectly correct and expected result. Further explanation > follows down below. > > On Wed, Dec 15, 2021 at 8:41 PM <yajun.deng@linux.dev> wrote: > > > > December 15, 2021 1:27 AM, "Song Liu" <song@kernel.org> wrote: > > > > > On Mon, Dec 13, 2021 at 7:17 PM Yajun Deng <yajun.deng@linux.dev> wrote: > > > > > >> We found an abnormally high latency when executing modprobe raid6_pq, the > > >> latency is greater than 1.2s when CONFIG_PREEMPT_VOLUNTARY=y, greater than > > >> 67ms when CONFIG_PREEMPT=y, and greater than 16ms when CONFIG_PREEMPT_RT=y. > > >> This is caused by disable the preemption, this time is too long and > > >> unreasonable. We just need to disable migration. so used migrate_disable()/ > > >> migrate_enable() instead of preempt_disable()/preempt_enable(). This is > > >> beneficial for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y, but no effect for > > >> CONFIG_PREEMPT_VOLUNTARY=y. > > >> > > >> Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions") > > >> Signed-off-by: Yajun Deng <yajun.deng@linux.dev> > > > > > > We measure the speed of different RAID algorithms.If we don't disable > > > preempt, the result may be inaccurate, right? IIUC, we only disable preempt > > > for 16 jiffies. Why do we see 1.2 second delay? > > > > Here are the command of my test: > > Execute "sudo cyclictest -S -p 95 -d 0 -i 1000 -D 24h -m" in one terminal and "sudo modprobe raid6_pq" in the other terminal. > > > > Here are the results of my test: > > CONFIG_PREEMPT_VOLUNTARY=y,CONFIG_HZ_250=y > > T:17 ( 3109) P:95 I:1000 C: 7255 Min: 1 Act: 2 Avg: 170 Max: 1220832 > > No preemption is expected with this config so the CPU is busy and > occupied for full duration of all the benchmarks. > > > CONFIG_PREEMPT=y,CONFIG_HZ_250=y > > T: 2 ( 2462) P:95 I:1000 C: 49024 Min: 1 Act: 2 Avg: 4 Max: 67888 > > 16 jiffies with 250 Hz equals to 64 ms. So again, this result is > perfectly expected. The benchmarks can be preempted in between the > individual measurements with preemptible kernel configuration. > > > CONFIG_PREEMPT_RT=y,CONFIG_HZ_1000=y > > T: 6 ( 2561) P:95 I:1000 C: 25284 Min: 3 Act: 3 Avg: 4 Max: 16593 > > 16 jiffies by 1000 Hz is 16 ms. Ditto. So far so good. Thanks for the explanation. I misread/misunderstood some of these results. I dropped the patch from md-next. Song
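Why the preempt-disabled window matters for the measurement itself can be illustrated with a toy model (a stand-alone sketch with made-up per-call costs; only the 64 ms window size comes from this thread, none of it is kernel code):

#include <stdio.h>

/* iterations that fit into the part of the window the task actually ran */
static long iterations(long window_us, long cost_us, long stolen_us)
{
	return (window_us - stolen_us) / cost_us;
}

int main(void)
{
	const long window_us = 64000;	/* 16 jiffies at HZ=250 */

	/* hypothetical per-call costs: algorithm A is genuinely faster than B */
	printf("A, undisturbed: %ld iterations\n", iterations(window_us, 10, 0));
	printf("B, undisturbed: %ld iterations\n", iterations(window_us, 12, 0));

	/* if ~20% of A's window is stolen by a preempting task, A scores below B */
	printf("A, preempted  : %ld iterations\n", iterations(window_us, 10, 13000));
	return 0;
}

With preempt_disable() the window cannot be stolen, so the per-algorithm counts stay comparable; with migrate_disable() the task keeps its CPU but can still be preempted, which is exactly the noise objected to above.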
diff --git a/lib/raid6/algos.c b/lib/raid6/algos.c
index 6d5e5000fdd7..21611d05c34c 100644
--- a/lib/raid6/algos.c
+++ b/lib/raid6/algos.c
@@ -162,7 +162,7 @@ static inline const struct raid6_calls *raid6_choose_gen(
 
 			perf = 0;
 
-			preempt_disable();
+			migrate_disable();
 			j0 = jiffies;
 			while ((j1 = jiffies) == j0)
 				cpu_relax();
@@ -171,7 +171,7 @@ static inline const struct raid6_calls *raid6_choose_gen(
 				(*algo)->gen_syndrome(disks, PAGE_SIZE, *dptrs);
 				perf++;
 			}
-			preempt_enable();
+			migrate_enable();
 
 			if (perf > bestgenperf) {
 				bestgenperf = perf;
@@ -186,7 +186,7 @@ static inline const struct raid6_calls *raid6_choose_gen(
 
 			perf = 0;
 
-			preempt_disable();
+			migrate_disable();
 			j0 = jiffies;
 			while ((j1 = jiffies) == j0)
 				cpu_relax();
@@ -196,7 +196,7 @@ static inline const struct raid6_calls *raid6_choose_gen(
 						      PAGE_SIZE, *dptrs);
 				perf++;
 			}
-			preempt_enable();
+			migrate_enable();
 
 			if (best == *algo)
 				bestxorperf = perf;
We found abnormally high latency when executing "modprobe raid6_pq": the
latency is greater than 1.2 s with CONFIG_PREEMPT_VOLUNTARY=y, greater than
67 ms with CONFIG_PREEMPT=y, and greater than 16 ms with CONFIG_PREEMPT_RT=y.
This is caused by disabling preemption for the whole benchmark, which is far
too long. We only need to disable migration, so use migrate_disable()/
migrate_enable() instead of preempt_disable()/preempt_enable(). This helps
with CONFIG_PREEMPT=y and CONFIG_PREEMPT_RT=y, but has no effect with
CONFIG_PREEMPT_VOLUNTARY=y.

Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions")
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
---
 lib/raid6/algos.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)