Message ID: 161340385320.1303470.2392622971006879777.stgit@warthog.procyon.org.uk
Series: Network fs helper library & fscache kiocb API [ver #3]
On Mon, Feb 15, 2021 at 06:40:27PM -0600, Steve French wrote:
> It could be good if netfs simplifies the problem experienced by network
> filesystems on Linux with readahead on large sequential reads - where we
> don't get as much parallelism due to only having one readahead request at a
> time (thus in many cases there is 'dead time' on either the network or the
> file server while waiting for the next readpages request to be issued).
> This can be a significant performance problem for current readpages when
> network latency is long (or e.g. in cases when network encryption is
> enabled and hardware offload is not available, so it is time consuming on
> the server or client to encrypt the packet).
>
> Do you see netfs much faster than current readpages for ceph?
>
> Have you been able to get much benefit from throttling readahead with ceph
> from the current netfs approach for clamping i/o?

The switch from readpages to readahead does help in a couple of corner cases.
For example, if you have two processes reading the same file at the same
time, one will now block on the other (due to the page lock) rather than
submitting a mess of overlapping and partial reads.

We're not there yet on having multiple outstanding reads. Bill and I had a
chat recently about how to make the readahead code detect that it is in a
"long fat pipe" situation (as opposed to just dealing with a slow device),
and submit extra readahead requests to make the best use of the bandwidth and
minimise blocking of the application.

That's not something for the netfs code to do, though; we can get into that
situation with highly parallel SSDs.
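To make the two-readers-of-one-file case concrete, a minimal userspace sketch
of the scenario might look like the following. It is not taken from the
thread; the use of two threads, plain buffered 1MB reads and a file path
passed on the command line are arbitrary choices for illustration.

/*
 * Two concurrent sequential readers of the same file.  On the old
 * ->readpages path the two streams could issue overlapping partial
 * reads; with ->readahead one of them simply waits on the page lock.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (1 << 20)		/* 1MB buffered reads */

static void *reader(void *arg)
{
	const char *path = arg;
	char *buf = malloc(CHUNK);
	int fd = open(path, O_RDONLY);
	ssize_t n;

	if (fd < 0 || !buf) {
		perror("open/malloc");
		return NULL;
	}
	while ((n = read(fd, buf, CHUNK)) > 0)
		;	/* just stream the data to drive readahead */
	close(fd);
	free(buf);
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t t1, t2;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	pthread_create(&t1, NULL, reader, argv[1]);
	pthread_create(&t2, NULL, reader, argv[1]);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

Build with -pthread and run it against a large file on the network mount
after dropping the page cache (echo 3 > /proc/sys/vm/drop_caches), then
compare the read pattern seen on the wire with and without the readahead
conversion.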
On Mon, Feb 15, 2021 at 8:10 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Mon, Feb 15, 2021 at 06:40:27PM -0600, Steve French wrote:
> > It could be good if netfs simplifies the problem experienced by network
> > filesystems on Linux with readahead on large sequential reads - where we
> > don't get as much parallelism due to only having one readahead request at
> > a time (thus in many cases there is 'dead time' on either the network or
> > the file server while waiting for the next readpages request to be
> > issued). This can be a significant performance problem for current
> > readpages when network latency is long (or e.g. in cases when network
> > encryption is enabled and hardware offload is not available, so it is
> > time consuming on the server or client to encrypt the packet).
> >
> > Do you see netfs much faster than current readpages for ceph?
> >
> > Have you been able to get much benefit from throttling readahead with
> > ceph from the current netfs approach for clamping i/o?
>
> The switch from readpages to readahead does help in a couple of corner
> cases. For example, if you have two processes reading the same file at
> the same time, one will now block on the other (due to the page lock)
> rather than submitting a mess of overlapping and partial reads.

Do you have a simple repro example of this we could try (fio, dbench, iozone
etc.) to get some objective perf data?

My biggest worry is making sure that the switch to netfs doesn't degrade
performance (which might be a low bar now, since current network file copy
perf seems to significantly lag at least Windows), and in some easy to
understand scenarios I want to make sure it actually helps perf.
On Mon, Feb 15, 2021 at 11:22:20PM -0600, Steve French wrote:
> On Mon, Feb 15, 2021 at 8:10 PM Matthew Wilcox <willy@infradead.org> wrote:
> > The switch from readpages to readahead does help in a couple of corner
> > cases. For example, if you have two processes reading the same file at
> > the same time, one will now block on the other (due to the page lock)
> > rather than submitting a mess of overlapping and partial reads.
>
> Do you have a simple repro example of this we could try (fio, dbench,
> iozone etc.) to get some objective perf data?

I don't. The problem was noted by the f2fs people, so maybe they have a
reproducer.

> My biggest worry is making sure that the switch to netfs doesn't degrade
> performance (which might be a low bar now, since current network file copy
> perf seems to significantly lag at least Windows), and in some easy to
> understand scenarios I want to make sure it actually helps perf.

I had a question about that ... you've mentioned having 4x4MB reads
outstanding as being the way to get optimum performance. Is there a
significant performance difference between 4x4MB, 16x1MB and 64x256kB?
I'm concerned about having "too large" an I/O on the wire at a given time.
For example, with a 1Gbps link, you get at most about 125MB/s. That's a
minimum latency of roughly 32us for a 4kB page, but 32ms for a 4MB page.

"For very simple tasks, people can perceive latencies down to 2 ms or less"
(https://danluu.com/input-lag/), so going all the way to 4MB I/Os takes us
well into the perceptible latency range, whereas a 256kB I/O is only around
2ms.

So could you do some experiments with fio doing direct I/O to see if it
takes significantly longer to do, say, 1TB of I/O in 4MB chunks vs 256kB
chunks? Obviously use threads to keep lots of I/Os outstanding.
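One way to run the kind of comparison being asked for is sketched below as a
small direct-I/O read timer. This is not from the thread: the default chunk
size, the single-threaded loop and the command-line interface are placeholder
choices, and in practice fio (with its iodepth and numjobs options) would be
the more convenient tool for keeping many I/Os outstanding.

/*
 * Read a file with O_DIRECT in fixed-size chunks and report MB/s, so
 * that e.g. 256kB and 4MB chunk sizes can be compared.  Run several
 * instances in parallel to keep multiple I/Os on the wire.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	size_t chunk = argc > 2 ? strtoul(argv[2], NULL, 0) : 256 * 1024;
	long long total = 0;
	struct timespec t0, t1;
	void *buf;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file> [chunk_bytes]\n", argv[0]);
		return 1;
	}
	if (posix_memalign(&buf, 4096, chunk)) {
		perror("posix_memalign");
		return 1;
	}
	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (;;) {
		ssize_t n = read(fd, buf, chunk);

		if (n <= 0)
			break;		/* EOF or error ends the run */
		total += n;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%lld bytes in %.2fs: %.1f MB/s (chunk %zu)\n",
	       total, secs, total / secs / 1e6, chunk);
	close(fd);
	free(buf);
	return 0;
}

Running it once with a 256kB chunk and once with a 4MB chunk, with the same
number of parallel instances, gives the comparison discussed above.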
On Tue, Feb 23, 2021 at 2:28 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Mon, Feb 15, 2021 at 11:22:20PM -0600, Steve French wrote:
> > On Mon, Feb 15, 2021 at 8:10 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > The switch from readpages to readahead does help in a couple of corner
> > > cases. For example, if you have two processes reading the same file at
> > > the same time, one will now block on the other (due to the page lock)
> > > rather than submitting a mess of overlapping and partial reads.
> >
> > Do you have a simple repro example of this we could try (fio, dbench,
> > iozone etc.) to get some objective perf data?
>
> I don't. The problem was noted by the f2fs people, so maybe they have a
> reproducer.
>
> > My biggest worry is making sure that the switch to netfs doesn't degrade
> > performance (which might be a low bar now, since current network file
> > copy perf seems to significantly lag at least Windows), and in some easy
> > to understand scenarios I want to make sure it actually helps perf.
>
> I had a question about that ... you've mentioned having 4x4MB reads
> outstanding as being the way to get optimum performance. Is there a
> significant performance difference between 4x4MB, 16x1MB and 64x256kB?
> I'm concerned about having "too large" an I/O on the wire at a given time.
> For example, with a 1Gbps link, you get at most about 125MB/s. That's a
> minimum latency of roughly 32us for a 4kB page, but 32ms for a 4MB page.
>
> "For very simple tasks, people can perceive latencies down to 2 ms or less"
> (https://danluu.com/input-lag/), so going all the way to 4MB I/Os takes us
> well into the perceptible latency range, whereas a 256kB I/O is only around
> 2ms.
>
> So could you do some experiments with fio doing direct I/O to see if it
> takes significantly longer to do, say, 1TB of I/O in 4MB chunks vs 256kB
> chunks? Obviously use threads to keep lots of I/Os outstanding.

That is a good question, and it has been months since I did experiments with
something similar. Obviously this will vary depending on whether RDMA or
multichannel is in use - but assuming the 'normal' low-end network
configuration, i.e. a 1Gbps link with no RDMA or multichannel, I could do
some more recent experiments.

In the past what I noticed was that server performance for simple workloads
like cp or grep increased with network I/O size up to a point: smaller than
a 256K packet size was bad. Performance improved significantly from 256K to
512K to 1MB, but only very slightly from 1MB to 2MB to 4MB, and sometimes
degraded at 8MB (IIRC 8MB is the maximum commonly supported by SMB3 servers)
- but this was with a single 1Gb adapter (no multichannel). In those examples
there wasn't a lot of concurrency on the wire.

I did some experiments with increasing the readahead size (which causes more
than one async read to be issued by cifs.ko, but presumably still results in
some 'dead time'), and it seemed to help perf of some sequential read
examples (e.g. grep or cp) to some servers, but I didn't try enough variety
of server targets to feel confident about that change, especially if netfs
is coming. For example, a change I experimented with was:

    sb->s_bdi->ra_pages = cifs_sb->ctx->rsize / PAGE_SIZE

to

    sb->s_bdi->ra_pages = 2 * cifs_sb->ctx->rsize / PAGE_SIZE

and it did seem to help a little.
I would expect that 8x1MB (i.e. trying to keep eight 1MB reads in flight,
which should keep the network mostly busy and not lead to too much dead time
on the server, client or network) is 'good enough' to keep the pipe full in
many readahead use cases (at least for non-RDMA, non-multichannel mounts on a
slower network), and I would expect the performance to be similar to the
equivalent using 2MB reads (e.g. 4x2MB) and perhaps better than 2x4MB. Below
a 1MB I/O size on the wire I would expect to see degradation due to packet
processing and task switching overhead. It would definitely be worth doing
more experimentation here.

--
Thanks,

Steve
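A back-of-the-envelope check of that intuition, using an assumed link speed
and round-trip time rather than numbers from the thread:

/*
 * Bandwidth-delay product for an assumed 1Gbps link and 10ms of
 * round-trip/server turnaround time, and the number of 1MB reads that
 * must be in flight just to cover it.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
	double gbps = 1.0;	/* assumed link speed */
	double rtt_ms = 10.0;	/* assumed round-trip/server time */
	double io_mb = 1.0;	/* size of each read on the wire */

	double bdp_mb = (gbps * 1e9 / 8.0) * (rtt_ms / 1e3) / 1e6;
	int needed = (int)ceil(bdp_mb / io_mb);

	printf("BDP = %.2f MB -> at least %d x %.0fMB reads in flight\n",
	       bdp_mb, needed, io_mb);
	return 0;
}

With those assumptions the pipe itself only holds a couple of megabytes, so
eight 1MB reads in flight comfortably exceeds the pure-network requirement
and leaves headroom for server processing ('dead') time.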
Steve French <smfrench@gmail.com> wrote:

> This (readahead behavior improvements in Linux, on single large file
> sequential read workloads like cp or grep) gets particularly interesting
> with SMB3 as multichannel becomes more common. With one channel, having one
> readahead request pending on the network is suboptimal - but not as bad as
> when multichannel is negotiated. Interestingly, in most cases two network
> connections to the same server (different TCP sockets, but the same mount,
> even in cases where there is only one network adapter) can achieve better
> performance - but it still significantly lags Windows (and probably other
> clients), as in Linux we don't keep multiple I/Os in flight at one time
> (unless different files are being read at the same time by different
> threads).

I think it should be relatively straightforward to make the netfs_readahead()
function generate multiple read requests. If I wasn't handed sufficient pages
by the VM upfront to do two or more read requests, I would need to do extra
expansion. There are a couple of ways this could be done:

 (1) I could expand the readahead_control after fully starting a read request
     and then create another independent read request, and another for
     however many we want.

 (2) I could expand the readahead_control first to cover however many
     requests I'm going to generate, then chop it up into individual read
     requests.

However, generating larger requests means we're more likely to run into a
problem for the cache: if we can't allocate enough pages to fill out a cache
block, we don't have enough data to write to the cache. Further, if the pages
are just unlocked and abandoned, readpage will be called to read them
individually - which means they likely won't get cached unless the cache
granularity is PAGE_SIZE. But that's probably okay if ENOMEM occurred.

There are some other considerations too:

 (*) I would need to query the filesystem to find out if I should create
     another request. The fs would have to keep track of how many I/O
     requests are in flight and what the limit is.

 (*) How and where should the readahead triggers be emplaced? I'm guessing
     that each block would need a trigger and that this should cause more
     requests to be generated until we hit the limit.

 (*) I would probably need to shuffle the request generation for the second
     and subsequent blocks in a single netfs_readahead() call off to a worker
     thread, because it'll probably be running in a userspace process's
     kernel-side context and would block the application from proceeding and
     consuming the pages already committed.

David
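A self-contained illustration of option (2) above, reduced to just the window
arithmetic: expand the readahead window to a whole number of per-request
slices (capped at the filesystem's in-flight limit), then carve it into
independent requests. This is plain userspace C that only prints the ranges;
the sizes and the in-flight limit are made-up example values, and in the real
code the equivalent work would be done against the readahead_control inside
netfs_readahead().

#include <stdio.h>

struct window {
	long long start;	/* byte offset of the readahead window */
	long long len;		/* bytes the VM asked us to read */
};

int main(void)
{
	struct window w = { .start = 4 << 20, .len = 3 << 20 }; /* example */
	long long rsize = 1 << 20;	/* per-request size (e.g. SMB3 rsize) */
	int max_inflight = 4;		/* limit the fs would report */
	long long want, pos;
	int reqs = 0;

	/* Step 1: expand the window so it covers a whole number of
	 * rsize-sized requests, up to the in-flight limit. */
	want = ((w.len + rsize - 1) / rsize) * rsize;
	if (want > (long long)max_inflight * rsize)
		want = (long long)max_inflight * rsize;
	w.len = want;

	/* Step 2: carve the expanded window into independent requests. */
	for (pos = w.start; pos < w.start + w.len; pos += rsize)
		printf("read request %d: [%lld, %lld)\n",
		       reqs++, pos, pos + rsize);
	return 0;
}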
On Wed, Feb 24, 2021 at 01:32:02PM +0000, David Howells wrote:
> Steve French <smfrench@gmail.com> wrote:
>
> > This (readahead behavior improvements in Linux, on single large file
> > sequential read workloads like cp or grep) gets particularly interesting
> > with SMB3 as multichannel becomes more common. With one channel, having
> > one readahead request pending on the network is suboptimal - but not as
> > bad as when multichannel is negotiated. Interestingly, in most cases two
> > network connections to the same server (different TCP sockets, but the
> > same mount, even in cases where there is only one network adapter) can
> > achieve better performance - but it still significantly lags Windows
> > (and probably other clients), as in Linux we don't keep multiple I/Os in
> > flight at one time (unless different files are being read at the same
> > time by different threads).
>
> I think it should be relatively straightforward to make the
> netfs_readahead() function generate multiple read requests. If I wasn't
> handed sufficient pages by the VM upfront to do two or more read requests,
> I would need to do extra expansion. There are a couple of ways this could
> be done:

I don't think this is a job for netfs_readahead(). We can get into a similar
situation with SSDs or RAID arrays where ideally we would have several
outstanding readahead requests. If your drive is connected through a 1Gbps
link (eg PCIe gen 1 x1) and has a latency of 10ms seek time, with one
outstanding read, each read needs to be 12.5MB in size in order to saturate
the bus. If the device supports 128 outstanding commands, each read need only
be 100kB. We need the core readahead code to handle this situation.

My suggestion for doing this is to send off an extra readahead request every
time we hit a !Uptodate page. It looks something like this (assuming the app
is processing the data fast and always hits the !Uptodate case) ...

1. hit 0, set readahead size to 64kB, mark 32kB as Readahead,
   send read for 0-64kB
   wait for 0-64kB to complete
2. hit 32kB (Readahead), no reads outstanding
   inc readahead size to 128kB, mark 128kB as Readahead,
   send read for 64-192kB
3. hit 64kB (!Uptodate), one read outstanding
   mark 256kB as Readahead, send read for 192-320kB
   mark 384kB as Readahead, send read for 320-448kB
   wait for 64-192kB to complete
4. hit 128kB (Readahead), two reads outstanding
   inc readahead size to 256kB, mark 576kB as Readahead,
   send read for 448-704kB
5. hit 192kB (!Uptodate), three reads outstanding
   mark 832kB as Readahead, send read for 704-960kB
   mark 1088kB as Readahead, send read for 960-1216kB
   wait for 192-320kB to complete
6. hit 256kB (Readahead), four reads outstanding
   mark 1344kB as Readahead, send read for 1216-1472kB
7. hit 320kB (!Uptodate), five reads outstanding
   mark 1600kB as Readahead, send read for 1472-1728kB
   mark 1856kB as Readahead, send read for 1728-1984kB
   wait for 320-448kB to complete
8. hit 384kB (Readahead), six reads outstanding
   mark 2112kB as Readahead, send read for 1984-2240kB
9. hit 448kB (!Uptodate), seven reads outstanding
   mark 2368kB as Readahead, send read for 2240-2496kB
   mark 2624kB as Readahead, send read for 2496-2752kB
   wait for 448-704kB to complete
10. hit 576kB (Readahead), eight reads outstanding
    mark 2880kB as Readahead, send read for 2752-3008kB
...

Once we stop hitting !Uptodate pages, we'll maintain the number of pages
marked as Readahead, and thus keep the number of readahead requests at the
level it determined was necessary to keep the link saturated.
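The ramp-up behaviour can be modelled with a toy discrete-time simulation. It
is an illustration of the scheme, not the kernel algorithm and not the exact
trace above: each read covers CHUNK pages and takes LATENCY ticks to
complete, the consumer uses one page per tick, touching a page carrying the
Readahead mark issues one more read, and stalling on a page that is not yet
Uptodate issues two more.

#include <stdio.h>
#include <string.h>

#define PAGES	4096
#define CHUNK	16		/* pages per read request */
#define LATENCY	64		/* ticks for one read to complete */
#define MAXREQ	(PAGES / CHUNK)

struct req { int first, done_at, pending; };

static struct req reqs[MAXREQ];
static int nreqs;
static char uptodate[PAGES], mark[PAGES];

static void send_read(int first, int now)
{
	if (nreqs >= MAXREQ || first + CHUNK > PAGES)
		return;
	reqs[nreqs++] = (struct req){ first, now + LATENCY, 1 };
	mark[first] = 1;		/* place the Readahead mark */
}

int main(void)
{
	int now, i, pos = 0, next = 0, inflight = 0, stalled = 0;

	send_read(next, 0); next += CHUNK;	/* initial synchronous read */

	for (now = 0; pos < PAGES - CHUNK && now < 100000; now++) {
		/* Complete any reads whose latency has elapsed. */
		for (i = 0; i < nreqs; i++)
			if (reqs[i].pending && reqs[i].done_at <= now) {
				memset(uptodate + reqs[i].first, 1, CHUNK);
				reqs[i].pending = 0;
			}
		for (inflight = 0, i = 0; i < nreqs; i++)
			inflight += reqs[i].pending;

		if (!uptodate[pos]) {
			if (!stalled) {
				/* !Uptodate hit: escalate with two extra
				 * readahead requests, then wait. */
				send_read(next, now); next += CHUNK;
				send_read(next, now); next += CHUNK;
				stalled = 1;
			}
			continue;
		}
		stalled = 0;
		if (mark[pos]) {
			/* Readahead mark hit: top the pipeline up. */
			mark[pos] = 0;
			send_read(next, now); next += CHUNK;
		}
		if (pos % 512 == 0)
			printf("tick %5d page %4d: %2d reads in flight\n",
			       now, pos, inflight);
		pos++;
	}
	return 0;
}

The printed in-flight count grows while the consumer keeps stalling on
!Uptodate pages, then levels off around the bandwidth-delay product of the
modelled device, matching the "maintain the level it determined was
necessary" behaviour described above.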
I think we may need to put a parallelism cap in the bdi so that a device which is just slow instead of at the end of a long fat pipe doesn't get overwhelmed with requests.
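Such a cap could be as simple as one extra field consulted before each
additional readahead request is issued. A minimal sketch, with an invented
structure and field name standing in for whatever the bdi would actually
carry:

#include <stdbool.h>

/*
 * Hypothetical per-device limit: a slow disk might set this to 1 or 2,
 * a device at the end of a long fat pipe to something much larger.
 */
struct ra_device_limits {
	unsigned int max_parallel_reads;
	unsigned int reads_in_flight;
};

static bool ra_may_send_more(const struct ra_device_limits *d)
{
	return d->reads_in_flight < d->max_parallel_reads;
}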