Overview of the Linux Virtual File System

Original author: Richard Gooch <rgooch@atnf.csiro.au>

Last updated on August 25, 2005

Copyright (C) 1999 Richard Gooch
Copyright (C) 2005 Pekka Enberg

This file is released under the GPLv2.


What is it?
===========

The Virtual File System (otherwise known as the Virtual Filesystem
Switch) is the software layer in the kernel that provides the
filesystem interface to userspace programs. It also provides an
abstraction within the kernel which allows different filesystem
implementations to coexist.


A Quick Look At How It Works
============================

In this section I'll briefly describe how things work, before
launching into the details. I'll start with describing what happens
when user programs open and manipulate files, and then look from the
other view which is how a filesystem is supported and subsequently
mounted.


Opening a File
--------------

The VFS implements the open(2), stat(2), chmod(2) and similar system
calls. The pathname argument is used by the VFS to search through the
directory entry cache (dentry cache or "dcache"). This provides a very
fast look-up mechanism to translate a pathname (filename) into a
specific dentry.

An individual dentry usually has a pointer to an inode. Inodes are the
things that live on disc drives, and can be regular files (you know:
those things that you write data into), directories, FIFOs and other
beasts. Dentries live in RAM and are never saved to disc: they exist
only for performance. Inodes live on disc and are copied into memory
when required. Later any changes are written back to disc. The inode
that lives in RAM is a VFS inode, and it is this which the dentry
points to. A single inode can be pointed to by multiple dentries
(think about hardlinks).

The dcache is meant to be a view into your entire filespace. Unlike
Linus, most of us losers can't fit enough dentries into RAM to cover
all of our filespace, so the dcache has bits missing. In order to
resolve your pathname into a dentry, the VFS may have to resort to
creating dentries along the way, and then loading the inode. This is
done by looking up the inode.

To look up an inode (usually read from disc) requires that the VFS
calls the lookup() method of the parent directory inode. This method
is installed by the specific filesystem implementation that the inode
lives in. There will be more on this later.

Once the VFS has the required dentry (and hence the inode), we can do
all those boring things like open(2) the file, or stat(2) it to peek
at the inode data. The stat(2) operation is fairly simple: once the
VFS has the dentry, it peeks at the inode data and passes some of it
back to userspace.
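
For orientation, here is a small userspace program that drives exactly the
path described above. The open(2) call causes the VFS to walk the dcache
(creating dentries and reading inodes as needed), and fstat(2) simply copies
data out of the in-memory inode; the file name used is arbitrary.

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/stat.h>
	#include <unistd.h>

	int main(void)
	{
		struct stat st;
		int fd;

		fd = open("/etc/passwd", O_RDONLY);	/* pathname -> dentry -> inode */
		if (fd < 0)
			return 1;
		fstat(fd, &st);				/* copies fields of the VFS inode */
		printf("size: %lld bytes\n", (long long)st.st_size);
		close(fd);				/* releases the file structure */
		return 0;
	}
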
Opening a file requires another operation: allocation of a file
structure (this is the kernel-side implementation of file
descriptors). The freshly allocated file structure is initialized with
a pointer to the dentry and a set of file operation member functions.
These are taken from the inode data. The open() file method is then
called so the specific filesystem implementation can do its work. You
can see that this is another switch performed by the VFS.

The file structure is placed into the file descriptor table for the
process.

Reading, writing and closing files (and other assorted VFS operations)
are done by using the userspace file descriptor to grab the appropriate
file structure, and then calling the required file structure method
function to do whatever is required.

For as long as the file is open, it keeps the dentry "open" (in use),
which in turn means that the VFS inode is still in use.

All VFS system calls (i.e. open(2), stat(2), read(2), write(2),
chmod(2) and so on) are called from a process context. You should
assume that these calls are made without any kernel locks being
held. This means that the processes may be executing the same piece of
filesystem or driver code at the same time, on different
processors. You should ensure that access to shared resources is
protected by appropriate locks.

Registering and Mounting a Filesystem
-------------------------------------

If you want to support a new kind of filesystem in the kernel, all you
need to do is call register_filesystem(). You pass a structure
describing the filesystem implementation (struct file_system_type)
which is then added to an internal table of supported filesystems. You
can do:

% cat /proc/filesystems

to see what filesystems are currently available on your system.
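
As a concrete illustration, a minimal module registering (and
unregistering) such a filesystem might look like the sketch below. The
"myfs" name and the myfs_get_sb() function are placeholders supplied by
the filesystem; kill_block_super() is the usual generic helper for
block-device based filesystems. struct file_system_type itself is
described in detail in the next section.

	#include <linux/fs.h>
	#include <linux/init.h>
	#include <linux/module.h>

	/* defined by the filesystem, see the mount discussion below */
	static struct super_block *myfs_get_sb(struct file_system_type *fs_type,
		int flags, const char *dev_name, void *data);

	static struct file_system_type myfs_fs_type = {
		.owner    = THIS_MODULE,
		.name     = "myfs",
		.get_sb   = myfs_get_sb,
		.kill_sb  = kill_block_super,	/* generic helper for block-based filesystems */
		.fs_flags = FS_REQUIRES_DEV,
	};

	static int __init init_myfs_fs(void)
	{
		return register_filesystem(&myfs_fs_type);
	}

	static void __exit exit_myfs_fs(void)
	{
		unregister_filesystem(&myfs_fs_type);
	}

	module_init(init_myfs_fs);
	module_exit(exit_myfs_fs);
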
When a request is made to mount a block device onto a directory in
your filespace the VFS will call the appropriate method for the
specific filesystem. The dentry for the mount point will then be
updated to point to the root inode for the new filesystem.

It's now time to look at things in more detail.


struct file_system_type
=======================

This describes the filesystem. As of kernel 2.6.13, the following
members are defined:

struct file_system_type {
	const char *name;
	int fs_flags;
	struct super_block *(*get_sb) (struct file_system_type *, int,
				       const char *, void *);
	void (*kill_sb) (struct super_block *);
	struct module *owner;
	struct file_system_type * next;
	struct list_head fs_supers;
};

  name: the name of the filesystem type, such as "ext2", "iso9660",
	"msdos" and so on

  fs_flags: various flags (i.e. FS_REQUIRES_DEV, FS_NO_DCACHE, etc.)

  get_sb: the method to call when a new instance of this
	filesystem should be mounted

  kill_sb: the method to call when an instance of this filesystem
	should be unmounted

  owner: for internal VFS use: you should initialize this to THIS_MODULE in
	most cases.

  next: for internal VFS use: you should initialize this to NULL

The get_sb() method has the following arguments:

  struct file_system_type *fs_type: describes the filesystem type being
	mounted; this is the structure that was passed to
	register_filesystem()

  int flags: mount flags

  const char *dev_name: the device name we are mounting.

  void *data: arbitrary mount options, usually comes as an ASCII
	string

The get_sb() method must determine if the block device specified
by dev_name contains a filesystem of the type the method
supports. On success the method returns the superblock pointer, on
failure it returns an error (an ERR_PTR() value).

The most interesting member of the superblock structure that the
get_sb() method fills in is the "s_op" field. This is a pointer to
a "struct super_operations" which describes the next level of the
filesystem implementation.

Usually, a filesystem uses one of the generic get_sb()
implementations and provides a fill_super() method instead. The
generic methods are:

  get_sb_bdev: mount a filesystem residing on a block device

  get_sb_nodev: mount a filesystem that is not backed by a device

  get_sb_single: mount a filesystem which shares the instance between
	all mounts

A fill_super() method implementation has the following arguments:

  struct super_block *sb: the superblock structure. The method fill_super()
	must initialize this properly.

  void *data: arbitrary mount options, usually comes as an ASCII
	string

  int silent: whether or not to be silent on error
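
To make the relationship concrete, here is a minimal sketch (error
handling largely omitted) of a block-device filesystem that lets
get_sb_bdev() do the generic work and supplies only a fill_super()
method. The myfs_* names, MYFS_MAGIC and MYFS_ROOT_INO are placeholders;
myfs_super_ops is a struct super_operations of the kind described in the
next section.

	static int myfs_fill_super(struct super_block *sb, void *data, int silent)
	{
		struct inode *root;

		sb->s_magic = MYFS_MAGIC;		/* hypothetical constant */
		sb->s_op = &myfs_super_ops;		/* see "struct super_operations" below */

		root = iget(sb, MYFS_ROOT_INO);		/* read the root inode */
		sb->s_root = d_alloc_root(root);	/* and hang the root dentry off it */
		if (!sb->s_root) {
			iput(root);
			return -ENOMEM;
		}
		return 0;
	}

	static struct super_block *myfs_get_sb(struct file_system_type *fs_type,
		int flags, const char *dev_name, void *data)
	{
		return get_sb_bdev(fs_type, flags, dev_name, data, myfs_fill_super);
	}
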
struct super_operations
=======================

This describes how the VFS can manipulate the superblock of your
filesystem. As of kernel 2.6.13, the following members are defined:

struct super_operations {
	struct inode *(*alloc_inode)(struct super_block *sb);
	void (*destroy_inode)(struct inode *);

	void (*read_inode) (struct inode *);

	void (*dirty_inode) (struct inode *);
	int (*write_inode) (struct inode *, int);
	void (*put_inode) (struct inode *);
	void (*drop_inode) (struct inode *);
	void (*delete_inode) (struct inode *);
	void (*put_super) (struct super_block *);
	void (*write_super) (struct super_block *);
	int (*sync_fs)(struct super_block *sb, int wait);
	void (*write_super_lockfs) (struct super_block *);
	void (*unlockfs) (struct super_block *);
	int (*statfs) (struct super_block *, struct kstatfs *);
	int (*remount_fs) (struct super_block *, int *, char *);
	void (*clear_inode) (struct inode *);
	void (*umount_begin) (struct super_block *);

	void (*sync_inodes) (struct super_block *sb,
				struct writeback_control *wbc);
	int (*show_options)(struct seq_file *, struct vfsmount *);

	ssize_t (*quota_read)(struct super_block *, int, char *, size_t, loff_t);
	ssize_t (*quota_write)(struct super_block *, int, const char *, size_t, loff_t);
};

All methods are called without any locks being held, unless otherwise
noted. This means that most methods can block safely. All methods are
only called from a process context (i.e. not from an interrupt handler
or bottom half).

  alloc_inode: this method is called by alloc_inode() to allocate memory
	for struct inode and initialize it. A sketch of a typical
	implementation appears at the end of this section.

  destroy_inode: this method is called by destroy_inode() to release
	resources allocated for struct inode.

  read_inode: this method is called to read a specific inode from the
	mounted filesystem. The i_ino member in the struct inode is
	initialized by the VFS to indicate which inode to read. Other
	members are filled in by this method.

	You can set this to NULL and use iget5_locked() instead of iget()
	to read inodes. This is necessary for filesystems for which the
	inode number is not sufficient to identify an inode.

  dirty_inode: this method is called by the VFS to mark an inode dirty.

  write_inode: this method is called when the VFS needs to write an
	inode to disc. The second parameter indicates whether the write
	should be synchronous or not; not all filesystems check this flag.

  put_inode: called when the VFS inode is removed from the inode
	cache.

  drop_inode: called when the last access to the inode is dropped,
	with the inode_lock spinlock held.

	This method should be either NULL (normal UNIX filesystem
	semantics) or "generic_delete_inode" (for filesystems that do not
	want to cache inodes - causing "delete_inode" to always be
	called regardless of the value of i_nlink)

	The "generic_delete_inode()" behavior is equivalent to the
	old practice of using "force_delete" in the put_inode() case,
	but does not have the races that the "force_delete()" approach
	had.

  delete_inode: called when the VFS wants to delete an inode

  put_super: called when the VFS wishes to free the superblock
	(i.e. unmount). This is called with the superblock lock held

  write_super: called when the VFS superblock needs to be written to
	disc. This method is optional

  sync_fs: called when VFS is writing out all dirty data associated with
	a superblock. The second parameter indicates whether the method
	should wait until the write out has been completed. Optional.

  write_super_lockfs: called when VFS is locking a filesystem and forcing
	it into a consistent state. This function is currently used by the
	Logical Volume Manager (LVM).

  unlockfs: called when VFS is unlocking a filesystem and making it writable
	again.

  statfs: called when the VFS needs to get filesystem statistics. This
	is called with the kernel lock held

  remount_fs: called when the filesystem is remounted. This is called
	with the kernel lock held

  clear_inode: called when the VFS clears the inode. Optional

  umount_begin: called when the VFS is unmounting a filesystem.

  sync_inodes: called when the VFS is writing out dirty data associated with
	a superblock.

  show_options: called by the VFS to show mount options for /proc/<pid>/mounts.

  quota_read: called by the VFS to read from the filesystem quota file.

  quota_write: called by the VFS to write to the filesystem quota file.

The read_inode() method is responsible for filling in the "i_op"
field. This is a pointer to a "struct inode_operations" which
describes the methods that can be performed on individual inodes.
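
As a rough sketch of how the first few methods tie together: a
filesystem that carries private per-inode state typically embeds the
VFS inode inside a larger structure, allocates that structure in
alloc_inode(), frees it in destroy_inode(), and points s_op at an
initializer like the one below. All the myfs_* names and the
myfs_inode_cachep slab cache are hypothetical; the other methods named
in the initializer are assumed to be defined elsewhere by the
filesystem.

	struct myfs_inode_info {
		__u32		i_private_state;	/* filesystem-private fields */
		struct inode	vfs_inode;		/* the VFS inode embedded within */
	};

	/* created in module init with kmem_cache_create() */
	static kmem_cache_t *myfs_inode_cachep;

	static struct inode *myfs_alloc_inode(struct super_block *sb)
	{
		struct myfs_inode_info *ei;

		ei = kmem_cache_alloc(myfs_inode_cachep, SLAB_KERNEL);
		if (!ei)
			return NULL;
		return &ei->vfs_inode;
	}

	static void myfs_destroy_inode(struct inode *inode)
	{
		kmem_cache_free(myfs_inode_cachep,
				container_of(inode, struct myfs_inode_info, vfs_inode));
	}

	static struct super_operations myfs_super_ops = {
		.alloc_inode	= myfs_alloc_inode,
		.destroy_inode	= myfs_destroy_inode,
		.read_inode	= myfs_read_inode,
		.write_inode	= myfs_write_inode,
		.put_super	= myfs_put_super,
		.statfs		= myfs_statfs,
	};
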
struct inode_operations
=======================

This describes how the VFS can manipulate an inode in your
filesystem. As of kernel 2.6.13, the following members are defined:

struct inode_operations {
	int (*create) (struct inode *,struct dentry *,int, struct nameidata *);
	struct dentry * (*lookup) (struct inode *,struct dentry *, struct nameidata *);
	int (*link) (struct dentry *,struct inode *,struct dentry *);
	int (*unlink) (struct inode *,struct dentry *);
	int (*symlink) (struct inode *,struct dentry *,const char *);
	int (*mkdir) (struct inode *,struct dentry *,int);
	int (*rmdir) (struct inode *,struct dentry *);
	int (*mknod) (struct inode *,struct dentry *,int,dev_t);
	int (*rename) (struct inode *, struct dentry *,
			struct inode *, struct dentry *);
	int (*readlink) (struct dentry *, char __user *,int);
	void * (*follow_link) (struct dentry *, struct nameidata *);
	void (*put_link) (struct dentry *, struct nameidata *, void *);
	void (*truncate) (struct inode *);
	int (*permission) (struct inode *, int, struct nameidata *);
	int (*setattr) (struct dentry *, struct iattr *);
	int (*getattr) (struct vfsmount *mnt, struct dentry *, struct kstat *);
	int (*setxattr) (struct dentry *, const char *,const void *,size_t,int);
	ssize_t (*getxattr) (struct dentry *, const char *, void *, size_t);
	ssize_t (*listxattr) (struct dentry *, char *, size_t);
	int (*removexattr) (struct dentry *, const char *);
};

Again, all methods are called without any locks being held, unless
otherwise noted.

  create: called by the open(2) and creat(2) system calls. Only
	required if you want to support regular files. The dentry you
	get should not have an inode (i.e. it should be a negative
	dentry). Here you will probably call d_instantiate() with the
	dentry and the newly created inode

  lookup: called when the VFS needs to look up an inode in a parent
	directory. The name to look for is found in the dentry. This
	method must call d_add() to insert the found inode into the
	dentry. The "i_count" field in the inode structure should be
	incremented. If the named inode does not exist a NULL inode
	should be inserted into the dentry (this is called a negative
	dentry). Returning an error code from this routine must only
	be done on a real error, otherwise creating inodes with system
	calls like create(2), mknod(2), mkdir(2) and so on will fail.
	If you wish to overload the dentry methods then you should
	initialise the "d_op" field in the dentry; this is a pointer
	to a struct "dentry_operations".
	This method is called with the directory inode semaphore held.
	A minimal sketch of a lookup() implementation appears after
	this list.

  link: called by the link(2) system call. Only required if you want
	to support hard links. You will probably need to call
	d_instantiate() just as you would in the create() method

  unlink: called by the unlink(2) system call. Only required if you
	want to support deleting inodes

  symlink: called by the symlink(2) system call. Only required if you
	want to support symlinks. You will probably need to call
	d_instantiate() just as you would in the create() method

  mkdir: called by the mkdir(2) system call. Only required if you want
	to support creating subdirectories. You will probably need to
	call d_instantiate() just as you would in the create() method

  rmdir: called by the rmdir(2) system call. Only required if you want
	to support deleting subdirectories

  mknod: called by the mknod(2) system call to create a device (char,
	block) inode or a named pipe (FIFO) or socket. Only required
	if you want to support creating these types of inodes. You
	will probably need to call d_instantiate() just as you would
	in the create() method

  readlink: called by the readlink(2) system call. Only required if
	you want to support reading symbolic links

  follow_link: called by the VFS to follow a symbolic link to the
	inode it points to. Only required if you want to support
	symbolic links. This function returns a void pointer cookie
	that is passed to put_link().

  put_link: called by the VFS to release resources allocated by
	follow_link(). The cookie returned by follow_link() is passed
	to this function as the last parameter. It is used by filesystems
	such as NFS where page cache is not stable (i.e. page that was
	installed when the symbolic link walk started might not be in the
	page cache at the end of the walk).

  truncate: called by the VFS to change the size of a file. The i_size
	field of the inode is set to the desired size by the VFS before
	this function is called. This function is called by the truncate(2)
	system call and related functionality.

  permission: called by the VFS to check for access rights on a POSIX-like
	filesystem.

  setattr: called by the VFS to set attributes for a file. This function is
	called by chmod(2) and related system calls.

  getattr: called by the VFS to get attributes of a file. This function is
	called by stat(2) and related system calls.

  setxattr: called by the VFS to set an extended attribute for a file.
	An extended attribute is a name:value pair associated with an
	inode. This function is called by the setxattr(2) system call.

  getxattr: called by the VFS to retrieve the value of an extended attribute
	name. This function is called by the getxattr(2) system call.

  listxattr: called by the VFS to list all extended attributes for a given
	file. This function is called by the listxattr(2) system call.

  removexattr: called by the VFS to remove an extended attribute from a file.
	This function is called by the removexattr(2) system call.
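
To illustrate the lookup() contract described above, here is a minimal
sketch for a simple disc-based filesystem. The myfs_find_entry() helper,
which scans the directory and returns an inode number (or 0 if the name
is absent), is hypothetical; iget() reads the inode and takes a
reference, and d_add() attaches it to the dentry, a NULL inode making
the dentry negative.

	static struct dentry *myfs_lookup(struct inode *dir, struct dentry *dentry,
					  struct nameidata *nd)
	{
		struct inode *inode = NULL;
		ino_t ino;

		ino = myfs_find_entry(dir, &dentry->d_name);	/* hypothetical helper */
		if (ino) {
			inode = iget(dir->i_sb, ino);	/* takes a reference on the inode */
			if (!inode)
				return ERR_PTR(-EACCES);
		}
		d_add(dentry, inode);	/* inode == NULL makes this a negative dentry */
		return NULL;
	}
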
struct address_space_operations
===============================

This describes how the VFS can manipulate mapping of a file to page cache in
your filesystem. As of kernel 2.6.13, the following members are defined:

struct address_space_operations {
	int (*writepage)(struct page *page, struct writeback_control *wbc);
	int (*readpage)(struct file *, struct page *);
	int (*sync_page)(struct page *);
	int (*writepages)(struct address_space *, struct writeback_control *);
	int (*set_page_dirty)(struct page *page);
	int (*readpages)(struct file *filp, struct address_space *mapping,
			struct list_head *pages, unsigned nr_pages);
	int (*prepare_write)(struct file *, struct page *, unsigned, unsigned);
	int (*commit_write)(struct file *, struct page *, unsigned, unsigned);
	sector_t (*bmap)(struct address_space *, sector_t);
	int (*invalidatepage) (struct page *, unsigned long);
	int (*releasepage) (struct page *, int);
	ssize_t (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
			loff_t offset, unsigned long nr_segs);
	struct page* (*get_xip_page)(struct address_space *, sector_t,
			int);
};

  writepage: called by the VM to write a dirty page to backing store.

  readpage: called by the VM to read a page from backing store.

  sync_page: called by the VM to notify the backing store to perform all
	queued I/O operations for a page. I/O operations for other pages
	associated with this address_space object may also be performed.

  writepages: called by the VM to write out pages associated with the
	address_space object.

  set_page_dirty: called by the VM to set a page dirty.

  readpages: called by the VM to read pages associated with the address_space
	object.

  prepare_write: called by the generic write path in VM to set up a write
	request for a page.

  commit_write: called by the generic write path in VM to write page to
	its backing store.

  bmap: called by the VFS to map a logical block offset within object to
	physical block number. This method is used by the legacy FIBMAP
	ioctl. Other uses are discouraged.

  invalidatepage: called by the VM on truncate to disassociate a page from its
	address_space mapping.

  releasepage: called by the VFS to release filesystem specific metadata from
	a page.

  direct_IO: called by the VM for direct I/O writes and reads.

  get_xip_page: called by the VM to translate a block number to a page.
	The page is valid until the corresponding filesystem is unmounted.
	Filesystems that want to use execute-in-place (XIP) need to implement
	it. An example implementation can be found in fs/ext2/xip.c.
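
For a conventional block-based filesystem, most of these methods can be
delegated to the generic buffer-layer helpers, parameterized by a
get_block() callback that maps file blocks to disc blocks. A minimal
sketch follows; myfs_get_block() is a hypothetical get_block_t
implementation supplied by the filesystem.

	static int myfs_readpage(struct file *file, struct page *page)
	{
		return block_read_full_page(page, myfs_get_block);
	}

	static int myfs_writepage(struct page *page, struct writeback_control *wbc)
	{
		return block_write_full_page(page, myfs_get_block, wbc);
	}

	static struct address_space_operations myfs_aops = {
		.readpage	= myfs_readpage,
		.writepage	= myfs_writepage,
		.sync_page	= block_sync_page,	/* generic "kick the request queue" helper */
	};
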
struct file_operations
======================

This describes how the VFS can manipulate an open file. As of kernel
2.6.13, the following members are defined:

struct file_operations {
	loff_t (*llseek) (struct file *, loff_t, int);
	ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
	ssize_t (*aio_read) (struct kiocb *, char __user *, size_t, loff_t);
	ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
	ssize_t (*aio_write) (struct kiocb *, const char __user *, size_t, loff_t);
	int (*readdir) (struct file *, void *, filldir_t);
	unsigned int (*poll) (struct file *, struct poll_table_struct *);
	int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
	long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
	long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
	int (*mmap) (struct file *, struct vm_area_struct *);
	int (*open) (struct inode *, struct file *);
	int (*flush) (struct file *);
	int (*release) (struct inode *, struct file *);
	int (*fsync) (struct file *, struct dentry *, int datasync);
	int (*aio_fsync) (struct kiocb *, int datasync);
	int (*fasync) (int, struct file *, int);
	int (*lock) (struct file *, int, struct file_lock *);
	ssize_t (*readv) (struct file *, const struct iovec *, unsigned long, loff_t *);
	ssize_t (*writev) (struct file *, const struct iovec *, unsigned long, loff_t *);
	ssize_t (*sendfile) (struct file *, loff_t *, size_t, read_actor_t, void *);
	ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int);
	unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
	int (*check_flags)(int);
	int (*dir_notify)(struct file *filp, unsigned long arg);
	int (*flock) (struct file *, int, struct file_lock *);
};

Again, all methods are called without any locks being held, unless
otherwise noted.

  llseek: called when the VFS needs to move the file position index

  read: called by read(2) and related system calls

  aio_read: called by io_submit(2) and other asynchronous I/O operations

  write: called by write(2) and related system calls

  aio_write: called by io_submit(2) and other asynchronous I/O operations

  readdir: called when the VFS needs to read the directory contents

  poll: called by the VFS when a process wants to check if there is
	activity on this file and (optionally) go to sleep until there
	is activity. Called by the select(2) and poll(2) system calls

  ioctl: called by the ioctl(2) system call

  unlocked_ioctl: called by the ioctl(2) system call. Filesystems that do not
	require the BKL should use this method instead of the ioctl() above.

  compat_ioctl: called by the ioctl(2) system call when 32 bit system calls
	are used on 64 bit kernels.

  mmap: called by the mmap(2) system call

  open: called by the VFS when an inode should be opened. When the VFS
	opens a file, it creates a new "struct file". It then calls the
	open method for the newly allocated file structure. You might
	think that the open method really belongs in
	"struct inode_operations", and you may be right. I think it's
	done the way it is because it makes filesystems simpler to
	implement. The open() method is a good place to initialize the
	"private_data" member in the file structure if you want to point
	to a device structure

  flush: called by the close(2) system call to flush a file

  release: called when the last reference to an open file is closed

  fsync: called by the fsync(2) system call

  fasync: called by the fcntl(2) system call when asynchronous
	(non-blocking) mode is enabled for a file

  lock: called by the fcntl(2) system call for F_GETLK, F_SETLK, and F_SETLKW
	commands

  readv: called by the readv(2) system call

  writev: called by the writev(2) system call

  sendfile: called by the sendfile(2) system call

  get_unmapped_area: called by the mmap(2) system call

  check_flags: called by the fcntl(2) system call for F_SETFL command

  dir_notify: called by the fcntl(2) system call for F_NOTIFY command

  flock: called by the flock(2) system call

Note that the file operations are implemented by the specific
filesystem in which the inode resides. When opening a device node
(character or block special) most filesystems will call special
support routines in the VFS which will locate the required device
driver information. These support routines replace the filesystem file
operations with those for the device driver, and then proceed to call
the new open() method for the file. This is how opening a device file
in the filesystem eventually ends up calling the device driver open()
method.
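
A filesystem that keeps its file data in the page cache can again lean
on generic helpers for most of these methods. The sketch below is
hypothetical (the myfs_* names and the my_find_device() helper are
placeholders); it also shows open() stashing a pointer in private_data,
as suggested above.

	static int myfs_open(struct inode *inode, struct file *filp)
	{
		/* remember a private object so later methods can find it cheaply */
		filp->private_data = my_find_device(inode);	/* hypothetical helper */
		return 0;
	}

	static struct file_operations myfs_file_ops = {
		.llseek	= generic_file_llseek,
		.read	= generic_file_read,
		.write	= generic_file_write,
		.mmap	= generic_file_mmap,
		.open	= myfs_open,
	};
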
Directory Entry Cache (dcache)
==============================


struct dentry_operations
------------------------

This describes how a filesystem can overload the standard dentry
operations. Dentries and the dcache are the domain of the VFS and the
individual filesystem implementations. Device drivers have no business
here. These methods may be set to NULL, as they are either optional or
the VFS uses a default. As of kernel 2.6.13, the following members are
defined:

struct dentry_operations {
	int (*d_revalidate)(struct dentry *, struct nameidata *);
	int (*d_hash) (struct dentry *, struct qstr *);
	int (*d_compare) (struct dentry *, struct qstr *, struct qstr *);
	int (*d_delete)(struct dentry *);
	void (*d_release)(struct dentry *);
	void (*d_iput)(struct dentry *, struct inode *);
};

  d_revalidate: called when the VFS needs to revalidate a dentry. This
	is called whenever a name look-up finds a dentry in the
	dcache. Most filesystems leave this as NULL, because all their
	dentries in the dcache are valid

  d_hash: called when the VFS adds a dentry to the hash table

  d_compare: called when a dentry should be compared with another

  d_delete: called when the last reference to a dentry is
	deleted. This means no-one is using the dentry, however it is
	still valid and in the dcache

  d_release: called when a dentry is really deallocated

  d_iput: called when a dentry loses its inode (just prior to its
	being deallocated). The default when this is NULL is that the
	VFS calls iput(). If you define this method, you must call
	iput() yourself

Each dentry has a pointer to its parent dentry, as well as a hash list
of child dentries. Child dentries are basically like files in a
directory.
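
As one illustration of overloading these methods, a filesystem with
case-insensitive names might supply a d_compare() along the following
lines. This is only a sketch with hypothetical myfs_ names; a real
implementation would pair it with a matching d_hash() method so that
equivalent names hash to the same bucket.

	static int myfs_d_compare(struct dentry *dentry, struct qstr *a, struct qstr *b)
	{
		/* return 0 if the two names are considered identical */
		if (a->len != b->len)
			return 1;
		return strnicmp(a->name, b->name, a->len) ? 1 : 0;
	}

	static struct dentry_operations myfs_dentry_ops = {
		.d_compare = myfs_d_compare,
	};
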
Directory Entry Cache APIs
--------------------------

There are a number of functions defined which permit a filesystem to
manipulate dentries:

  dget: open a new handle for an existing dentry (this just increments
	the usage count)

  dput: close a handle for a dentry (decrements the usage count). If
	the usage count drops to 0, the "d_delete" method is called
	and the dentry is placed on the unused list if the dentry is
	still in its parent's hash list. Putting the dentry on the
	unused list just means that if the system needs some RAM, it
	goes through the unused list of dentries and deallocates them.
	If the dentry has already been unhashed and the usage count
	drops to 0, in this case the dentry is deallocated after the
	"d_delete" method is called

  d_drop: this unhashes a dentry from its parent's hash list. A
	subsequent call to dput() will deallocate the dentry if its
	usage count drops to 0

  d_delete: delete a dentry. If there are no other open references to
	the dentry then the dentry is turned into a negative dentry
	(the d_iput() method is called). If there are other
	references, then d_drop() is called instead

  d_add: add a dentry to its parent's hash list and then calls
	d_instantiate()

  d_instantiate: add a dentry to the alias hash list for the inode and
	updates the "d_inode" member. The "i_count" member in the
	inode structure should be set/incremented. If the inode
	pointer is NULL, the dentry is called a "negative
	dentry". This function is commonly called when an inode is
	created for an existing negative dentry

  d_lookup: look up a dentry given its parent and path name component.
	It looks up the child of that given name from the dcache
	hash table. If it is found, the reference count is incremented
	and the dentry is returned. The caller must use dput()
	to free the dentry when it finishes using it.
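
Putting the reference-counting rules together, a typical caller of
d_lookup() looks like the following sketch; the parent dentry and the
struct qstr name are assumed to be set up by the (hypothetical) caller.

	/* returns non-zero if "name" exists (positively) in the dcache under "parent" */
	static int myfs_name_is_cached(struct dentry *parent, struct qstr *name)
	{
		struct dentry *child;
		int found = 0;

		child = d_lookup(parent, name);		/* takes a reference on success */
		if (child) {
			found = (child->d_inode != NULL); /* NULL means a negative dentry */
			dput(child);			  /* drop the reference when done */
		}
		return found;
	}
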
RCU-based dcache locking model
------------------------------

On many workloads, the most common operation on dcache is
to look up a dentry, given a parent dentry and the name
of the child. Typically, for every open(), stat() etc.,
the dentry corresponding to the pathname will be looked
up by walking the tree starting with the first component
of the pathname and using that dentry along with the next
component to look up the next level and so on. Since it
is a frequent operation for workloads like multiuser
environments and web servers, it is important to optimize
this path.

Prior to 2.5.10, dcache_lock was acquired in d_lookup and thus
in every component during path look-up. Since 2.5.10 onwards,
the fast-walk algorithm changed this by holding the dcache_lock
at the beginning and walking as many cached path component
dentries as possible. This significantly decreases the number
of acquisitions of dcache_lock. However it also increases the
lock hold time significantly and affects performance in large
SMP machines. Since the 2.5.62 kernel, dcache has been using
a new locking model that uses RCU to make dcache look-up
lock-free.

The current dcache locking model is not very different from the
previous one. Prior to the 2.5.62 kernel, dcache_lock
protected the hash chain, the d_child, d_alias and d_lru lists as well
as d_inode and several other things like mount look-up. The RCU-based
changes affect only the way the hash chain is protected. For everything
else the dcache_lock must be taken for both traversing and
updating. The hash chain updates too take the dcache_lock.
The significant change is the way d_lookup traverses the hash chain:
it doesn't acquire the dcache_lock for this and relies on RCU to
ensure that the dentry has not been *freed*.

Dcache locking details
----------------------

For many multi-user workloads, open() and stat() on files are
very frequently occurring operations. Both involve walking
of path names to find the dentry corresponding to the
concerned file. In the 2.4 kernel, dcache_lock was held
during look-up of each path component. Contention and
cache-line bouncing of this global lock caused significant
scalability problems. With the introduction of RCU
in the Linux kernel, this was worked around by making
the look-up of path components during path walking lock-free.

Safe lock-free look-up of dcache hash table
===========================================

Dcache is a complex data structure with the hash table entries
also linked together in other lists. In the 2.4 kernel, dcache_lock
protected all the lists. We applied RCU only on hash chain
walking. The rest of the lists are still protected by dcache_lock.
Some of the important changes are:

1. The deletion from hash chain is done using hlist_del_rcu() macro which
   doesn't initialize next pointer of the deleted dentry and this
   allows us to walk safely lock-free while a deletion is happening.

2. Insertion of a dentry into the hash table is done using
   hlist_add_head_rcu() which takes care of ordering the writes -
   the writes to the dentry must be visible before the dentry
   is inserted. This works in conjunction with hlist_for_each_rcu()
   while walking the hash chain. The only requirement is that
   all initialization to the dentry must be done before hlist_add_head_rcu()
   since we don't have dcache_lock protection while traversing
   the hash chain. This isn't different from the existing code.

3. A dentry looked up without holding dcache_lock cannot be
   returned for walking if it is unhashed. It then may have a NULL
   d_inode or other bogosity since RCU doesn't protect the other
   fields in the dentry. We therefore use a flag DCACHE_UNHASHED to
   indicate unhashed dentries and use this in conjunction with a
   per-dentry lock (d_lock). Once looked up without the dcache_lock,
   we acquire the per-dentry lock (d_lock) and check if the
   dentry is unhashed. If so, the look-up is failed. If not, the
   reference count of the dentry is increased and the dentry is returned.

4. Once a dentry is looked up, it must be ensured during the path
   walk for that component that it doesn't go away. In pre-2.5.10 code,
   this was done by holding a reference to the dentry. dcache_rcu does
   the same. In some sense, dcache_rcu path walking looks like
   the pre-2.5.10 version.

5. All dentry hash chain updates must take the dcache_lock as well as
   the per-dentry lock, in that order. dput() does this to ensure
   that a dentry that has just been looked up on another CPU
   doesn't get deleted before dget() can be done on it.

6. There are several ways to do reference counting of RCU protected
   objects. One such example is in the ipv4 route cache where
   deferred freeing (using call_rcu()) is done as soon as
   the reference count goes to zero. This cannot be done in
   the case of dentries because tearing down of dentries
   requires blocking (dentry_iput()) which isn't supported from
   RCU callbacks. Instead, tearing down of dentries happens
   synchronously in dput(), but actual freeing happens later
   when the RCU grace period is over. This allows safe lock-free
   walking of the hash chains, but a matched dentry may have
   been partially torn down. The checking of the DCACHE_UNHASHED
   flag with d_lock held detects such dentries and prevents
   them from being returned from look-up.
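
The publish-then-walk discipline in points 1-3 is the standard RCU
hlist pattern. The sketch below is not the actual dcache code, just an
illustration of that pattern on a hypothetical hashed object, with a
per-object lock and an "unhashed" flag playing the roles of d_lock and
DCACHE_UNHASHED.

	#include <linux/list.h>
	#include <linux/rcupdate.h>
	#include <linux/spinlock.h>

	struct myobj {
		struct hlist_node	node;
		spinlock_t		lock;		/* plays the role of d_lock */
		int			unhashed;	/* plays the role of DCACHE_UNHASHED */
		int			key;
	};

	static DEFINE_SPINLOCK(table_lock);		/* plays the role of dcache_lock */

	/* writer: the object must be fully initialized before it is published */
	static void myobj_hash(struct myobj *obj, struct hlist_head *bucket)
	{
		spin_lock(&table_lock);
		hlist_add_head_rcu(&obj->node, bucket);	/* orders earlier writes before the insert */
		spin_unlock(&table_lock);
	}

	/* reader: lock-free bucket walk, validated under the per-object lock */
	static struct myobj *myobj_lookup(struct hlist_head *bucket, int key)
	{
		struct myobj *obj;
		struct hlist_node *pos;

		rcu_read_lock();
		hlist_for_each_entry_rcu(obj, pos, bucket, node) {
			spin_lock(&obj->lock);
			if (!obj->unhashed && obj->key == key) {
				/* a real user would take a reference here, as dcache does */
				spin_unlock(&obj->lock);
				rcu_read_unlock();
				return obj;
			}
			spin_unlock(&obj->lock);
		}
		rcu_read_unlock();
		return NULL;
	}
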
Maintaining POSIX rename semantics
==================================

Since look-up of dentries is lock-free, it can race against
a concurrent rename operation. For example, during rename
of file A to B, look-up of either A or B must succeed.
So, if look-up of B happens after A has been removed from the
hash chain but not added to the new hash chain, it may fail.
Also, a comparison while the name is being written concurrently
by a rename may result in false positive matches violating
rename semantics. Issues related to race with rename are
handled as described below:

1. Look-up can be done in two ways - d_lookup() which is safe
   from simultaneous renames and __d_lookup() which is not.
   If __d_lookup() fails, it must be followed up by a d_lookup()
   to correctly determine whether a dentry is in the hash table
   or not. d_lookup() protects look-ups using a sequence
   lock (rename_lock). A sketch of this fallback appears after
   this list.

2. The name associated with a dentry (d_name) may be changed if
   a rename is allowed to happen simultaneously. To avoid memcmp()
   in __d_lookup() going out of bounds due to a rename and a false
   positive comparison, the name comparison is done while holding the
   per-dentry lock. This prevents concurrent renames during this
   operation.

3. Hash table walking during look-up may move to a different bucket as
   the current dentry is moved to a different bucket due to rename.
   But we use hlists in the dcache hash table and they are null-terminated.
   So, even if a dentry moves to a different bucket, the hash chain
   walk will terminate. [With a list_head list, it may not, since
   termination is when the list_head in the original bucket is reached.]
   Since we redo the d_parent check and compare the name while holding
   d_lock, lock-free look-up will not race against d_move().

4. There can be a theoretical race when a dentry keeps coming back
   to the original bucket due to double moves. Due to this, look-up may
   consider that it has never moved and can end up in an infinite loop.
   But this is not any worse than the theoretical livelocks we already
   have in the kernel.
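
A condensed sketch of the fallback described in point 1 (the real code
lives in the VFS path-walking code; this only shows the shape of it):

	static struct dentry *cached_lookup(struct dentry *parent, struct qstr *name)
	{
		struct dentry *dentry;

		dentry = __d_lookup(parent, name);	 /* fast, lock-free; may race with rename */
		if (!dentry)
			dentry = d_lookup(parent, name); /* retried under the rename_lock seqlock */
		return dentry;
	}
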
Important guidelines for filesystem developers related to dcache_rcu
====================================================================

1. Existing dcache interfaces (pre-2.5.62) exported to filesystems
   don't change. Only the dcache internal implementation changes. However
   filesystems *must not* delete from the dentry hash chains directly
   using the list macros as was allowed earlier. They must use dcache
   APIs like d_drop() or __d_drop() depending on the situation.

2. d_flags is now protected by a per-dentry lock (d_lock). All
   access to d_flags must be protected by it.

3. For a hashed dentry, checking of d_count needs to be protected
   by d_lock.


Papers and other documentation on dcache locking
=================================================

1. Scaling dcache with RCU (http://linuxjournal.com/article.php?sid=7124).

2. http://lse.sourceforge.net/locking/dcache/dcache.html