author     Christian Brauner <brauner@kernel.org>	2023-12-28 11:34:22 +0100
committer  Christian Brauner <brauner@kernel.org>	2023-12-28 11:34:22 +0100
commit     86fb59411553c553fe327db067a2435ecb72c80f
tree       530a84a43651d519fd39ab8c29cb06de69eeb0e2	/fs/netfs/misc.c
parent     Linux 6.7-rc7
parent     9p: Use netfslib read/write_iter
Merge tag 'netfs-lib-20231228' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
Pull netfs updates from David Howells:

The main aims of these patches are to get high-level I/O and knowledge of the pagecache out of the filesystem drivers as much as possible and to get rid, as much as possible, of the knowledge that pages/folios exist. Further, I would like to see ->write_begin, ->write_end and ->launder_folio go away.

Features that are added by these patches to that which is already there in netfslib:

 (1) NFS-style (and Ceph-style) locking around DIO vs buffered I/O calls to prevent these from happening at the same time. mmap'd I/O can, of necessity, happen at any time ignoring these locks.

 (2) Support for unbuffered I/O. The data is kept in the bounce buffer and the pagecache is not used. This can be turned on with an inode flag.

 (3) Support for direct I/O. This is basically unbuffered I/O with some extra restrictions and no RMW.

 (4) Support for using a bounce buffer in an operation. The bounce buffer may be bigger than the target data/buffer, allowing for crypto rounding.

 (5) ->write_begin() and ->write_end() are ignored in favour of merging all of that into one function, netfs_perform_write(), thereby avoiding the function pointer traversals.

 (6) Support for write-through caching in the pagecache. netfs_perform_write() adds the pages it modifies to an I/O operation as it goes and directly marks them writeback rather than dirty. When writing back from write-through, it limits the range written back. This should allow CIFS to deal with byte-range mandatory locks correctly.

 (7) O_*SYNC and RWF_*SYNC writes use write-through rather than writing to the pagecache and then flushing afterwards. An AIO O_*SYNC write will notify of completion when the sub-writes all complete.

 (8) Support for write-streaming where modified data is held in !uptodate folios, with a private struct attached indicating the range that is valid.

 (9) Support for write grouping, multiplexing a pointer to a group in the folio private data with the write-streaming data. The writepages algorithm only writes stuff back that's in the nominated group. This is intended for use by Ceph to write its snaps in order.

(10) Skipping reads for which we know the server could only supply zeros or EOF (for instance if we've done a local write that leaves a hole in the file and extends the local inode size).

General notes:

 (1) The fscache module is merged into the netfslib module to avoid cyclic exported symbol usage that prevents either module from being loaded.

 (2) Some helpers from fscache are reassigned to netfslib by name.

 (3) netfslib now makes use of folio->private, which means the filesystem can't use it.

 (4) The filesystem provides wrappers to call the write helpers, allowing it to do pre-validation, oplock/capability fetching and the passing in of write group info (a sketch of such a wrapper appears after the shortlog below).

 (5) I want to try flushing the data when tearing down an inode before invalidating it to try and render launder_folio unnecessary.

 (6) Write-through caching will generate and dispatch write subrequests as it gathers enough data to hit wsize and has whole pages that at least span that size. This needs to be a bit more flexible, allowing for a filesystem such as CIFS to have a variable wsize.

 (7) The filesystem driver is just given read and write calls with an iov_iter describing the data/buffer to use. Ideally, it doesn't see pages or folios at all. A function, extract_iter_to_sg(), is already available to decant part of an iterator into a scatterlist for crypto purposes. A sketch of what this looks like from the driver's side follows this list.
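[Editorial illustration of general note (7): a minimal, hypothetical sketch of a driver-side receive path that only deals in the iov_iter it was handed. The foofs_* names and the context structure are invented for illustration and are not part of netfslib; only copy_to_iter() is a real kernel helper.]

#include <linux/uio.h>
#include <linux/minmax.h>
#include <linux/errno.h>

/* Hypothetical per-read context: netfslib describes the destination with an
 * iov_iter; the driver never touches pages or folios directly.
 */
struct foofs_read_ctx {
	struct iov_iter	*iter;		/* destination described by netfslib */
	size_t		remaining;	/* bytes still expected from the server */
};

/* Copy a chunk of network reply data straight into the supplied iterator. */
static int foofs_fill_read(struct foofs_read_ctx *ctx, const void *data, size_t len)
{
	size_t want = min(len, ctx->remaining);
	size_t copied = copy_to_iter(data, want, ctx->iter);

	if (copied < want)
		return -EFAULT;	/* iterator ran out of space */
	ctx->remaining -= copied;
	return 0;
}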
AFS notes:

 (1) I pushed a pair of patches that clean up the trace header down to the base so that they can be shared with another branch.

9P notes:

 (1) Most of xfstests now pass - more, in fact, since upstream 9p lacks a writepages method and can't handle mmap writes. An occasional oops (and sometimes panic) happens somewhere in the pathwalk/FID handling code that is unrelated to these changes.

 (2) Writes should now occur in larger-than-page-sized chunks.

 (3) It should be possible to turn on multipage folio support in 9P now.

All in all, these patches remove a little over 800 lines from AFS and 300 from 9P, albeit with around 3000 lines added to netfs. Hopefully, I will be able to remove a bunch of lines from Ceph too. I've split the CIFS patches out to a separate branch, cifs-netfs, where a further 2000+ lines are removed. I can run a certain amount of xfstests on CIFS, though I'm running into ksmbd issues and not all the tests work correctly because of issues between fallocate and what the SMB protocol actually supports.

I've also dropped the content-crypto patches out for the moment as they're only usable by the ceph changes which I'm still working on. The patch to use PG_writeback instead of PG_fscache for writing to the cache has also been deferred, pending 9p, afs, ceph and cifs all being converted.

* tag 'netfs-lib-20231228' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs: (40 commits)
  9p: Use netfslib read/write_iter
  afs: Use the netfs write helpers
  netfs: Export the netfs_sreq tracepoint
  netfs: Optimise away reads above the point at which there can be no data
  netfs: Implement a write-through caching option
  netfs: Provide a launder_folio implementation
  netfs: Provide a writepages implementation
  netfs, cachefiles: Pass upper bound length to allow expansion
  netfs: Provide netfs_file_read_iter()
  netfs: Allow buffered shared-writeable mmap through netfs_page_mkwrite()
  netfs: Implement buffered write API
  netfs: Implement unbuffered/DIO write support
  netfs: Implement unbuffered/DIO read support
  netfs: Allocate multipage folios in the writepath
  netfs: Make netfs_read_folio() handle streaming-write pages
  netfs: Provide func to copy data to pagecache for buffered write
  netfs: Dispatch write requests to process a writeback slice
  netfs: Prep to use folio->private for write grouping and streaming write
  netfs: Make the refcounting of netfs_begin_read() easier to use
  netfs: Make netfs_put_request() handle a NULL pointer
  ...

Signed-off-by: Christian Brauner <brauner@kernel.org>
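[Editorial illustration of general note (4) and of shortlog entries such as "netfs: Provide netfs_file_read_iter()" and "9p: Use netfslib read/write_iter": a filesystem's file_operations can thinly wrap the netfslib iter helpers. This is only a sketch; the foofs_* names are hypothetical, and the helper signatures are assumed to follow the usual (struct kiocb *, struct iov_iter *) pattern.]

#include <linux/fs.h>
#include <linux/netfs.h>

/* Hypothetical wrappers: do filesystem-specific pre-validation, then hand the
 * actual I/O to netfslib.
 */
static ssize_t foofs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
	/* Revalidation, lease/lock checks, etc. would go here. */
	return netfs_file_read_iter(iocb, to);
}

static ssize_t foofs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	/* Oplock/capability fetching and write-group setup would go here. */
	return netfs_file_write_iter(iocb, from);
}

static const struct file_operations foofs_file_ops = {
	.read_iter	= foofs_file_read_iter,
	.write_iter	= foofs_file_write_iter,
	.mmap		= generic_file_mmap,	/* placeholder; a real fs would also wire up netfs_page_mkwrite() */
};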
Diffstat (limited to 'fs/netfs/misc.c')
 -rw-r--r--  fs/netfs/misc.c  260
 1 file changed, 260 insertions, 0 deletions
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
new file mode 100644
index 000000000000..0e3af37fc924
--- /dev/null
+++ b/fs/netfs/misc.c
@@ -0,0 +1,260 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Miscellaneous routines.
+ *
+ * Copyright (C) 2023 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#include <linux/swap.h>
+#include "internal.h"
+
+/*
+ * Attach a folio to the buffer and maybe set marks on it to say that we need
+ * to put the folio later and twiddle the pagecache flags.
+ */
+int netfs_xa_store_and_mark(struct xarray *xa, unsigned long index,
+ struct folio *folio, unsigned int flags,
+ gfp_t gfp_mask)
+{
+ XA_STATE_ORDER(xas, xa, index, folio_order(folio));
+
+retry:
+ xas_lock(&xas);
+ for (;;) {
+ xas_store(&xas, folio);
+ if (!xas_error(&xas))
+ break;
+ xas_unlock(&xas);
+ if (!xas_nomem(&xas, gfp_mask))
+ return xas_error(&xas);
+ goto retry;
+ }
+
+ if (flags & NETFS_FLAG_PUT_MARK)
+ xas_set_mark(&xas, NETFS_BUF_PUT_MARK);
+ if (flags & NETFS_FLAG_PAGECACHE_MARK)
+ xas_set_mark(&xas, NETFS_BUF_PAGECACHE_MARK);
+ xas_unlock(&xas);
+ return xas_error(&xas);
+}
+
+/*
+ * Create the specified range of folios in the buffer attached to the read
+ * request. The folios are marked with NETFS_BUF_PUT_MARK so that we know that
+ * these need freeing later.
+ */
+int netfs_add_folios_to_buffer(struct xarray *buffer,
+ struct address_space *mapping,
+ pgoff_t index, pgoff_t to, gfp_t gfp_mask)
+{
+ struct folio *folio;
+ int ret;
+
+ if (to + 1 == index) /* Page range is inclusive */
+ return 0;
+
+ do {
+ /* TODO: Figure out what order folio can be allocated here */
+ folio = filemap_alloc_folio(readahead_gfp_mask(mapping), 0);
+ if (!folio)
+ return -ENOMEM;
+ folio->index = index;
+ ret = netfs_xa_store_and_mark(buffer, index, folio,
+ NETFS_FLAG_PUT_MARK, gfp_mask);
+ if (ret < 0) {
+ folio_put(folio);
+ return ret;
+ }
+
+ index += folio_nr_pages(folio);
+ } while (index <= to && index != 0);
+
+ return 0;
+}
+
+/*
+ * Clear an xarray buffer, putting a ref on the folios that have
+ * NETFS_BUF_PUT_MARK set.
+ */
+void netfs_clear_buffer(struct xarray *buffer)
+{
+ struct folio *folio;
+ XA_STATE(xas, buffer, 0);
+
+ rcu_read_lock();
+ xas_for_each_marked(&xas, folio, ULONG_MAX, NETFS_BUF_PUT_MARK) {
+ folio_put(folio);
+ }
+ rcu_read_unlock();
+ xa_destroy(buffer);
+}
+
+/**
+ * netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback
+ * @mapping: The mapping the folio belongs to.
+ * @folio: The folio being dirtied.
+ *
+ * Set the dirty flag on a folio and pin an in-use cache object in memory so
+ * that writeback can later write to it. This is intended to be called from
+ * the filesystem's ->dirty_folio() method.
+ *
+ * Return: true if the dirty flag was set on the folio, false otherwise.
+ */
+bool netfs_dirty_folio(struct address_space *mapping, struct folio *folio)
+{
+ struct inode *inode = mapping->host;
+ struct netfs_inode *ictx = netfs_inode(inode);
+ struct fscache_cookie *cookie = netfs_i_cookie(ictx);
+ bool need_use = false;
+
+ _enter("");
+
+ if (!filemap_dirty_folio(mapping, folio))
+ return false;
+ if (!fscache_cookie_valid(cookie))
+ return true;
+
+ if (!(inode->i_state & I_PINNING_NETFS_WB)) {
+ spin_lock(&inode->i_lock);
+ if (!(inode->i_state & I_PINNING_NETFS_WB)) {
+ inode->i_state |= I_PINNING_NETFS_WB;
+ need_use = true;
+ }
+ spin_unlock(&inode->i_lock);
+
+ if (need_use)
+ fscache_use_cookie(cookie, true);
+ }
+ return true;
+}
+EXPORT_SYMBOL(netfs_dirty_folio);
+
+/**
+ * netfs_unpin_writeback - Unpin writeback resources
+ * @inode: The inode on which the cookie resides
+ * @wbc: The writeback control
+ *
+ * Unpin the writeback resources pinned by netfs_dirty_folio(). This is
+ * intended to be called as/by the netfs's ->write_inode() method.
+ */
+int netfs_unpin_writeback(struct inode *inode, struct writeback_control *wbc)
+{
+ struct fscache_cookie *cookie = netfs_i_cookie(netfs_inode(inode));
+
+ if (wbc->unpinned_netfs_wb)
+ fscache_unuse_cookie(cookie, NULL, NULL);
+ return 0;
+}
+EXPORT_SYMBOL(netfs_unpin_writeback);
+
+/**
+ * netfs_clear_inode_writeback - Clear writeback resources pinned by an inode
+ * @inode: The inode to clean up
+ * @aux: Auxiliary data to apply to the inode
+ *
+ * Clear any writeback resources held by an inode when the inode is evicted.
+ * This must be called before clear_inode() is called.
+ */
+void netfs_clear_inode_writeback(struct inode *inode, const void *aux)
+{
+ struct fscache_cookie *cookie = netfs_i_cookie(netfs_inode(inode));
+
+ if (inode->i_state & I_PINNING_NETFS_WB) {
+ loff_t i_size = i_size_read(inode);
+ fscache_unuse_cookie(cookie, aux, &i_size);
+ }
+}
+EXPORT_SYMBOL(netfs_clear_inode_writeback);
+
+/**
+ * netfs_invalidate_folio - Invalidate or partially invalidate a folio
+ * @folio: Folio proposed for release
+ * @offset: Offset of the invalidated region
+ * @length: Length of the invalidated region
+ *
+ * Invalidate part or all of a folio for a network filesystem. The folio will
+ * be removed afterwards if the invalidated region covers the entire folio.
+ */
+void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
+{
+ struct netfs_folio *finfo = NULL;
+ size_t flen = folio_size(folio);
+
+ _enter("{%lx},%zx,%zx", folio_index(folio), offset, length);
+
+ folio_wait_fscache(folio);
+
+ if (!folio_test_private(folio))
+ return;
+
+ finfo = netfs_folio_info(folio);
+
+ if (offset == 0 && length >= flen)
+ goto erase_completely;
+
+ if (finfo) {
+ /* We have a partially uptodate page from a streaming write. */
+ unsigned int fstart = finfo->dirty_offset;
+ unsigned int fend = fstart + finfo->dirty_len;
+ unsigned int end = offset + length;
+
+ if (offset >= fend)
+ return;
+ if (end <= fstart)
+ return;
+ if (offset <= fstart && end >= fend)
+ goto erase_completely;
+ if (offset <= fstart && end > fstart)
+ goto reduce_len;
+ if (offset > fstart && end >= fend)
+ goto move_start;
+ /* A partial write was split. The caller has already zeroed
+ * it, so just absorb the hole.
+ */
+ }
+ return;
+
+erase_completely:
+ netfs_put_group(netfs_folio_group(folio));
+ folio_detach_private(folio);
+ folio_clear_uptodate(folio);
+ kfree(finfo);
+ return;
+reduce_len:
+ finfo->dirty_len = offset + length - finfo->dirty_offset;
+ return;
+move_start:
+ finfo->dirty_len -= offset - finfo->dirty_offset;
+ finfo->dirty_offset = offset;
+}
+EXPORT_SYMBOL(netfs_invalidate_folio);
+
+/**
+ * netfs_release_folio - Try to release a folio
+ * @folio: Folio proposed for release
+ * @gfp: Flags qualifying the release
+ *
+ * Request release of a folio and clean up its private state if it's not busy.
+ * Returns true if the folio can now be released, false if not
+ */
+bool netfs_release_folio(struct folio *folio, gfp_t gfp)
+{
+ struct netfs_inode *ctx = netfs_inode(folio_inode(folio));
+ unsigned long long end;
+
+ end = folio_pos(folio) + folio_size(folio);
+ if (end > ctx->zero_point)
+ ctx->zero_point = end;
+
+ if (folio_test_private(folio))
+ return false;
+ if (folio_test_fscache(folio)) {
+ if (current_is_kswapd() || !(gfp & __GFP_FS))
+ return false;
+ folio_wait_fscache(folio);
+ }
+
+ fscache_note_page_release(netfs_i_cookie(ctx));
+ return true;
+}
+EXPORT_SYMBOL(netfs_release_folio);
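
[Editorial note: the exported helpers in this file are designed to slot into a network filesystem's address_space_operations and super_operations. Below is a minimal sketch of that wiring for a hypothetical filesystem "foofs"; the netfs_* signatures are taken from the code above, everything else is illustrative rather than any particular filesystem's implementation.]

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/writeback.h>
#include <linux/netfs.h>

/* Hypothetical aops: dirtying pins the cache cookie, invalidation trims any
 * streaming-write state, and release is refused while the folio is busy.
 */
static const struct address_space_operations foofs_aops = {
	.dirty_folio		= netfs_dirty_folio,
	.invalidate_folio	= netfs_invalidate_folio,
	.release_folio		= netfs_release_folio,
	/* ->read_folio, ->writepages, etc. come from other netfslib patches */
};

static int foofs_write_inode(struct inode *inode, struct writeback_control *wbc)
{
	/* Drop the cache-object pin taken by netfs_dirty_folio(). */
	return netfs_unpin_writeback(inode, wbc);
}

static void foofs_evict_inode(struct inode *inode)
{
	truncate_inode_pages_final(&inode->i_data);
	/* Must run before clear_inode(), per netfs_clear_inode_writeback(). */
	netfs_clear_inode_writeback(inode, NULL);
	clear_inode(inode);
}

static const struct super_operations foofs_super_ops = {
	.write_inode	= foofs_write_inode,
	.evict_inode	= foofs_evict_inode,
};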