Linux device driver
Introduction
• Each module is made up of object code (not linked into a complete executable) that can be dynamically linked to the
running kernel by the insmod program and can be unlinked by the rmmod program.
• Device Drivers are of three types:
• Char device driver
• Block device driver
• Network device driver
• A programmer can implement a driver which is a hybrid of the above three.
• Types of devices :
• Character devices
• A character (char) device is one that can be accessed as a stream of bytes (like a file)
• a char driver is in charge of implementing this behavior.
• Such a driver usually implements at least the open, close, read, and write system calls.
• Example : /dev/console and /dev/ttyS0
• Block devices
• A block device is a device (e.g., a disk) that can host a filesystem
• a block device can only handle I/O operations that transfer one or more whole blocks, which are usually 512 bytes
(or a larger power of two) in length
• Eg : Memory card
Introduction
• Network devices
• Network interface cards
• ASICs
• Network processors
• USB devices
• Every USB device is driven by a USB module that works with the USB subsystem, but the device itself shows up in
the system as a char device (a USB serial port, say), a block device (a USB memory card reader), or a network device
(a USB Ethernet interface).
Kernel programming basics
• The exit fn of a module should release all resources; otherwise those resources would uselessly stay reserved in the kernel until the system
reboots.
• The only functions a KM can call are the ones exported by the kernel; there are no libraries to link against
• Only functions that are actually part of the kernel itself may be used in kernel modules.
• Unix transfers execution from user space to kernel space whenever an application issues a system call or is suspended by a
hardware interrupt
• Linux systems run multiple processes, more than one of which can be trying to use your driver at the same time
• Linux kernel code, including driver code, must be reentrant—it must be capable of running in more than one context at the
same time.
• If you do not write your driver code with concurrency in mind, it will be subject to catastrophic failures that can be
exceedingly difficult to debug.
• Kernel modules don’t execute sequentially; they are event driven
• Kernel code (that is being executed in the context of the process) can refer to the current process by accessing the global item
current, defined in <asm/current.h>, which yields a pointer to struct task_struct, defined by <linux/sched.h>. The current
pointer refers to the process that is currently executing [21, LDD3]
• A device driver can just include <linux/sched.h> and refer to the current process. Example below prints the process name and
pid in kernel world.
• printk doesn’t flush until a trailing newline is provided
• Q : Is current a global variable ?
printk(KERN_INFO "The process is \"%s\" (pid %i)\n", current->comm, current->pid);
Kernel programming basics
• User space Applications are laid out in virtual memory with a very large stack area.
• The kernel, instead, has a very small stack; it can be as small as a single, 4096-byte page. So be careful not to write recursive
calls in driver code, and refrain from creating large automatic (stack) variables.
• Kernel code cannot do floating point arithmetic. (Why ?)
Loading and unloading modules
• insmod – loads the KM into the kernel. How does it work in detail ? [25 LDD3]
sys_init_module() - system call, allocates kernel memory to hold the module using vmalloc
|
copies module text into memory region
|
resolves kernel references in the module via the kernel symbol table (linking)
|
calls the module’s initialization function
• modprobe – loads the kernel module plus all the other kernel modules whose fns it references [25 LDD3]
• modprobe looks only in the standard installed module directories
• rmmod - remove KM from the kernel
• module removal fails if the kernel believes that the module is still in use. For example, an application may still be using the KM.
Kernel programming basics
• We can configure the kernel to allow “forced” removal of a KM. Things may go wrong, however.
• modprobe command can sometimes replace several invocations of insmod
• lsmod - The lsmod program produces a list of the modules currently loaded in the kernel.
• lsmod works by reading the /proc/modules virtual file.
• Kernel Symbol table
• The table contains the addresses of global kernel items—functions and variables
• When a module is loaded, any symbol exported by the module becomes part of the kernel symbol table
• You need to export symbols, however, whenever other modules may benefit from using them.
• If your module needs to export symbols for other modules to use, the following macros should be used :
• Either of the above macros makes the given symbol available outside the module.
Vermagic.o
An object file from the kernel source directory that describes the environment a module was built for.
EXPORT_SYMBOL(name)
EXPORT_SYMBOL_GPL(name)
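As a minimal sketch (the helper name myhelper_add and both module bodies are made up for illustration), one module can export a symbol and another can link against it at load time:

/* module A: exports the symbol */
#include <linux/module.h>

int myhelper_add(int a, int b)          /* hypothetical helper */
{
        return a + b;
}
EXPORT_SYMBOL(myhelper_add);            /* the symbol enters the kernel symbol table */

/* module B: uses the symbol (module A must be loaded first) */
extern int myhelper_add(int a, int b);  /* resolved against the kernel symbol table at insmod time */

static int __init b_init(void)
{
        pr_info("2 + 3 = %d\n", myhelper_add(2, 3));
        return 0;
}
module_init(b_init);
MODULE_LICENSE("GPL");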
Char Device Driver
Char device driver
• Char devices are accessed through names in the filesystem, in the /dev dir; ls -l shows them with a leading “c” in the file type column
• Block devices, on the other hand, are represented by files shown with a leading “b”
Major number and Minor Number
• ls -l in the /dev dir shows two numbers separated by a comma. These are called the major and minor numbers respectively.
• Major number identifies the driver associated with the device.
• Modern Linux kernels allow multiple drivers to share major numbers
• The minor number is used by the kernel to determine exactly which device is being referred to
• Minor no identifies the device being operated by driver.
Major number and Minor Number Macros
• dev_t (defined in <linux/types.h>) is a 32-bit quantity with 12 bits set aside for the major number and 20 for the minor
number.
• MAJOR(dev_t dev) – retrieve the major no
• MINOR(dev_t dev) – retrieve the minor no.
• dev_t dev = MKDEV(int major, int minor) – creates a dev_t object out of major and minor numbers.
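A tiny sketch of how these macros fit together (the numbers are arbitrary, for illustration only):

#include <linux/types.h>
#include <linux/kdev_t.h>

dev_t dev = MKDEV(250, 0);      /* build a dev_t from major 250, minor 0 */
int major = MAJOR(dev);         /* major == 250 */
int minor = MINOR(dev);         /* minor == 0 */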
Allocating and freeing device numbers
• Declared in <linux/fs.h>
• Here, first is the beginning device number of the range you would like to allocate (a dev_t holding the major number and the first minor number).
int register_chrdev_region(dev_t first, unsigned int count, char *name);
• The minor number portion of first is often 0, but there is no requirement to that effect.
• count is the total number of contiguous device numbers you are requesting
• Finally, name is the name of the device that should be associated with this number range
• it will appear in /proc/devices and sysfs.
• Return value is 0 if success, else negative error code.
• register_chrdev_region works well if you know ahead of time exactly which device numbers you want.
• Often, however, you will not know which major number your device will use; hence there is an API to dynamically allocate a
major and minor number for your driver and devices. The API is :
• dev is an output-only parameter that will, on successful completion, hold the first number in your allocated range.
• firstminor should be the requested first minor number to use; it is usually 0.
• The count and name parameters work like those given to register_chrdev_region.
Freeing the Major number and Minor Number
• Regardless of how you allocate your device numbers, you should free them when they are no longer in use
• Device numbers are freed with :
• The usual place to call unregister_chrdev_region would be in your module’s cleanup function
int alloc_chrdev_region(dev_t *dev, unsigned int firstminor,
unsigned int count, char *name);
void unregister_chrdev_region(dev_t first, unsigned int count)
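A hedged sketch of the usual allocate/free pairing in a module's init and exit functions (the driver name "myscull" and the count of 1 are example values, not part of any real API):

#include <linux/module.h>
#include <linux/fs.h>

static dev_t devno;     /* first device number allocated to us */

static int __init myscull_init(void)
{
        /* ask for 1 dynamically allocated device number, first minor = 0 */
        int ret = alloc_chrdev_region(&devno, 0, 1, "myscull");
        if (ret < 0)
                return ret;
        pr_info("myscull: got major %d, minor %d\n", MAJOR(devno), MINOR(devno));
        return 0;
}

static void __exit myscull_exit(void)
{
        unregister_chrdev_region(devno, 1);     /* free the numbers in the cleanup function */
}

module_init(myscull_init);
module_exit(myscull_exit);
MODULE_LICENSE("GPL");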
Dynamic allocation of Major and Minor numbers
• Some major device numbers are statically assigned to the most common devices
• As a driver writer, you have a choice:
• you can simply pick a number that appears to be unused, or
• you can allocate major numbers in a dynamic manner
• Picking a number may work as long as the only user of your driver is you;
• once your driver is more widely deployed, a randomly picked major number will lead to conflicts and trouble
• Recommended to use dynamic allocation to obtain your major device number
• To statically create a device in /dev dir for a device :
mknod /dev/<device_name> c <major no> <minor no>
Eg : mknod /dev/scull0 c 3 4
The sketch below shows how the user can choose to assign the major and minor numbers to the driver by specifying a
command line arg with insmod (scull_major).
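A hedged sketch of that pattern, loosely following the scull example from LDD3 (the variable names are illustrative; a major of 0 means "allocate dynamically"):

#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/fs.h>

static int scull_major; /* 0 => ask the kernel to pick one */
static int scull_minor;
module_param(scull_major, int, S_IRUGO);        /* set with: insmod scull.ko scull_major=254 */
module_param(scull_minor, int, S_IRUGO);

static int __init scull_init(void)
{
        dev_t dev;
        int ret;

        if (scull_major) {      /* the user supplied a major number on the insmod command line */
                dev = MKDEV(scull_major, scull_minor);
                ret = register_chrdev_region(dev, 1, "scull");
        } else {                /* otherwise let the kernel choose */
                ret = alloc_chrdev_region(&dev, scull_minor, 1, "scull");
                scull_major = MAJOR(dev);
        }
        return ret;
}
module_init(scull_init);
MODULE_LICENSE("GPL");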
Device file
• register_chrdev_region()/alloc_chrdev_region() create an entry in the /proc/devices file. It just tells that there exists a
driver in the kernel which is registered to perform I/O on a device file with that major number.
• For your driver to be fully operative, a device file representing the hw should be created in the /dev dir. The above two
fn calls alone are not sufficient to create this entry in the /dev dir.
• On Unix, each piece of hardware is represented by a file located in /dev, called a device file, which provides the
means to communicate with the hardware.
• The device driver provides the communication on behalf of a user program
• Manual Method of creating a device file in /dev
• Once the driver is insmod-ed in kernel, you can manually create device file in /dev using mknod command
• Use simple rm to remove the device file.
• Eg : mknod /dev/myscull c 12 2, where 12 is major number, 2 is minor number
• rm /dev/myscull
• A device node created by mknod is just a file that contains a device major and minor number. When you
access that file the first time, Linux looks for a driver that advertises that major/minor and loads it. Your
driver then handles all I/O with that file.
Device file
[root@localhost dev]# cat /proc/devices
Character devices:
1 mem
4 tty
4 ttyS
5 /dev/tty
5 /dev/console
5 /dev/ptmx
10 misc
21 sg
128 ptm
136 pts
250 myscull << tells that major number 250 has been reserved to be acted upon by the driver name myscull
251 uio
252 bsg
253 ptp
254 pps
struct file_Operations
structure
Connecting driver operations to device numbers
• So far, we have reserved some device numbers for our use – Major no to driver, and minor no to device
• We need to connect of our driver’s operations to these numbers
• We achieve this using the file_operations structure defined in <linux/fs.h>
• This structure is a collection of function pointers
• We need to register our device specific fns to these function pointers to perform device specific operations.
• An open device file is represented by a struct file structure in the kernel
• Listing of structure on next slide is taken from kernel 4.1.x
Struct file_operations
struct file_operations {
struct module *owner;
>> The first file_operations field is not an operation at all; it is a pointer to the module that “owns” the structure. This field is used to prevent the module from
being unloaded while its operations are in use. Almost all the time, it is simply initialized to THIS_MODULE, a macro defined in <linux/module.h>.
loff_t (*llseek) (struct file *, loff_t, int);
>>The llseek method is used to change the current read/write position in a file, and the new position is returned as a (positive) return value. The loff_t parameter is a “long
offset” and is at least 64 bits wide even on 32-bit platforms. Errors are signaled by a negative return value. If this function pointer is NULL, seek calls will modify the position counter in the
file structure (described in the section “The file Structure”) in potentially unpredictable ways.
ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
>>Used to retrieve data from the device. A null pointer in this position causes the read system call to fail with -EINVAL (“Invalid argument”). A nonnegative return value
represents the number of bytes successfully read (the return value is a “signed size” type, usually the native integer type for the target platform).
ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
>>Sends data to the device. If NULL, -EINVAL is returned to the program calling the write system call. The return value, if nonnegative, represents the number of bytes
successfully written.
ssize_t (*read_iter) (struct kiocb *, struct iov_iter *); >> probably similar to fread() page 66 LDD3
ssize_t (*write_iter) (struct kiocb *, struct iov_iter *); >> probably similar to vector version of read/write, page 69
int (*iterate) (struct file *, struct dir_context *);
unsigned int (*poll) (struct file *, struct poll_table_struct *);
>> The poll method is the back end of three system calls: poll, epoll, and select, all of which are used to query whether a read or write to one or more file descriptors
would block. The poll method should return a bit mask indicating whether nonblocking reads or writes are possible, and, possibly, provide the kernel with information that can be used to
put the calling process to sleep until I/O becomes possible. If a driver leaves its poll method NULL, the device is assumed to be both readable and writable without blocking.
long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
>>The ioctl system call offers a way to issue device-specific commands (such as formatting a track of a floppy disk, which is neither reading nor writing). Additionally, a few
ioctl commands are recognized by the kernel without referring to the fops table. If the device doesn’t provide an ioctl method, the system call returns an error for any request that isn’t
predefined (-ENOTTY, “No such ioctl for device”).
int (*mmap) (struct file *, struct vm_area_struct *);
>> mmap is used to request a mapping of device memory to a process’s address space. If this method is NULL, the mmap system call returns -ENODEV.
Struct file_operations
int (*mremap)(struct file *, struct vm_area_struct *);
int (*open) (struct inode *, struct file *);
>> Though this is always the first operation performed on the device file, the driver is not required to declare a corresponding method. If this entry is NULL, opening the
device always succeeds, but your driver isn’t notified.
int (*flush) (struct file *, fl_owner_t id);
>> The flush operation is invoked when a process closes its copy of a file descriptor for a device; it should execute (and wait for) any outstanding operations on the device.
This must not be confused with the fsync operation requested by user programs. If flush is NULL, the kernel simply ignores the user application request.
int (*release) (struct inode *, struct file *);
>> This operation is invoked when the file structure is being released. Like open, release can be NULL.*
int (*fsync) (struct file *, loff_t, loff_t, int datasync);
>> This method is the back end of the fsync system call, which a user calls to flush any pending data. If this pointer is NULL, the system call returns –EINVAL. How it Is
different from fflush() ?
int (*aio_fsync) (struct kiocb *, int datasync);
>> This is the asynchronous version of the fsync method.
int (*fasync) (int, struct file *, int);
>> This operation is used to notify the device of a change in its FASYNC flag. Asynchronous notification is an advanced topic and is described in Chapter 6 [LDD3]. The field
can be NULL if the driver doesn’t support asynchronous notification.
int (*lock) (struct file *, int, struct file_lock *);
ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int);
>> sendpage is the other half of sendfile; it is called by the kernel to send data, one page at a time, to the corresponding file. Device drivers do not usually implement
sendpage.
unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
>> The purpose of this method is to find a suitable location in the process’s address space to map in a memory segment on the underlying device. This task is normally
performed by the memory management code; this method exists to allow drivers to enforce any alignment requirements a particular device may have. Most drivers can leave this
method NULL.
Struct file_operations
int (*check_flags)(int);
int (*flock) (struct file *, int, struct file_lock *);
ssize_t (*splice_write)(struct pipe_inode_info *, struct file *, loff_t *, size_t, unsigned int);
ssize_t (*splice_read)(struct file *, loff_t *, struct pipe_inode_info *, size_t, unsigned int);
int (*setlease)(struct file *, long, struct file_lock **, void **);
long (*fallocate)(struct file *file, int mode, loff_t offset,
loff_t len);
void (*show_fdinfo)(struct seq_file *m, struct file *f);
#ifndef CONFIG_MMU
unsigned (*mmap_capabilities)(struct file *);
#endif
};
struct file structure
Struct file
• struct file, defined in <linux/fs.h>, is the second most important data structure used in device drivers.
• The file structure represents an open file. It is created by the open() system call.
• fork()/dup() do not create this structure in the kernel; only open() does. They just increment the reference counter in the existing
structure. If n processes call open(), n struct file instances are created in the kernel, while the inode stays unique (one per file).
• It is not specific to device drivers; every open file in the system has an associated struct file in kernel space.
• It is created by the kernel on open and is passed to any function that operates on the file, until the last close.
• After all instances of the file are closed, the kernel releases the data structure
• Details on fields of structure is underneath.
• drivers never create file structures; they only access structures created elsewhere.
• file structure represents an open file descriptor, and not a device (file)
struct file {
struct list_head f_list;
struct dentry *f_dentry;
>> The directory entry (dentry) structure associated with the file. Device driver writers normally need not concern themselves with dentry structures, other than to
access the inode structure as filp->f_dentry->d_inode.
struct vfsmount *f_vfsmnt;
struct file_operations *f_op;
>> The operations associated with the file. The value in filp->f_op is never saved by the kernel for later reference; this means that you can change the file operations
associated with your file, and the new methods will be effective after you return to the caller. The kernel assigns the pointer as part of its implementation of open.
atomic_t f_count;
unsigned int f_flags;
>> These are the file flags, such as O_RDONLY, O_NONBLOCK, and O_SYNC. A driver should check the O_NONBLOCK flag to see if nonblocking operation has been
requested , the other flags are seldom used. In particular, read/write permission should be checked using f_mode rather than f_flags. All the flags are defined in the header <linux/fcntl.h>
Struct file
mode_t f_mode;
>> The file mode identifies the file as either readable or writable (or both), by means of the bits FMODE_READ and FMODE_WRITE. You might want to check this field for
read/write permission in your open or ioctl function, but you don’t need to check permissions for read and write, because the kernel checks before invoking your method. An attempt to
read or write when the file has not been opened for that type of access is rejected without the driver even knowing about it.
int f_error;
loff_t f_pos;
>> The current reading or writing position. loff_t is a 64-bit value on all platforms (long long in gcc terminology). The driver can read this value if it needs to know the
current position in the file but should not normally change it; read and write should update a position using the pointer they receive as the last argument instead of acting on filp->f_pos
directly. The one exception to this rule is in the llseek method, the purpose of which is to change the file position.
struct fown_struct f_owner;
unsigned int f_uid, f_gid;
struct file_ra_state f_ra;
unsigned long f_version;
void *f_security;
/* needed for tty driver, and maybe others */
void *private_data;
>> The open system call sets this pointer to NULL before calling the open method for the driver. You are free to make your own use of the field or to ignore it; you can
use the field to point to allocated data, but then you must remember to free that memory in the release method before the file structure is destroyed by the kernel. private_data is a
useful resource for preserving state information across system calls and is used by most of our sample modules.
#ifdef CONFIG_EPOLL
/* Used by fs/eventpoll.c to link all the hooks to this file */
struct list_head f_ep_links;
spinlock_t f_ep_lock;
#endif /* #ifdef CONFIG_EPOLL */
struct address_space *f_mapping;
};
struct inode structure
struct inode
• Inode basics in general
• An Inode is a data structure that stores the following information about a file in general
• Size of file
• Device ID
• User ID of the file
• Group ID of the file
• The file mode information and access privileges for owner, group and others
• File protection flags
• The timestamps for file creation, modification etc
• link counter to determine the number of hard links Pointers to the blocks storing file’s contents
Please note that the name of the file represented by an inode is not stored in the inode itself.
Directory entries map each file name to its corresponding inode number; this is how the kernel resolves a name to an
inode.
The reason for separating the file name from the other information about the same file is to support hard links: once the
other information is kept apart from the file name, several file names can point to the same inode.
http://www.thegeekstuff.com/2012/01/linux-inodes/
struct inode
• The inode structure is used by the kernel internally to represent files.
• It is different from the file structure that represents an open file descriptor
• There can be numerous file structures representing multiple open descriptors on a single file (device), but they all point
to a single inode structure.
• space for Inodes is allocated when the operating system or a new file system is installed and when it does its initial
structuring – register_chrdev_region()/alloc_chrdev_region()
• As a general rule, only two fields of this structure are of interest for writing driver code
• dev_t i_rdev - For inodes that represent device files, this field contains the actual device number.
• struct cdev *i_cdev - struct cdev is the kernel’s internal structure that represents char devices; this field contains a
pointer to that structure when the inode refers to a char device file
• Two macros that can be used to obtain the major and minor number from an inode. These macros should be used
instead of manipulating i_rdev directly.
• unsigned int iminor(struct inode *inode);
• unsigned int imajor(struct inode *inode);
• Display inode number :
• ls -i
• df -i
• stat <file name>
struct cdev structure
struct cdev and Character device registration
• The kernel uses structures of type struct cdev to represent char devices internally.
• Before the kernel invokes your device’s operations, you must allocate and register one or more of these structures
• your code should include <linux/cdev.h>, where the structure and its associated helper functions are defined
struct cdev {
struct kobject kobj;
struct module *owner; // set to THIS_MODULE
struct file_operations *ops;
struct list_head list;
dev_t dev;
unsigned int count;
};
• Two ways of allocating and initializing struct cdev structures:
• struct cdev *my_cdev = cdev_alloc( );
my_cdev->ops = &my_fops;
• void cdev_init(struct cdev *cdev, struct file_operations *fops);
• Once the cdev structure is set up, the final step is to tell the kernel about it with a call to:
• int cdev_add(struct cdev *dev, dev_t num, unsigned int count);
Here, dev is the cdev structure, num is the first device number to which this device responds, and count is the number of device numbers that should be
associated with the device.
Struct cdev and Character device registration
• There are a couple of important things to keep in mind when using cdev_add.
• The first is that this call can fail. If it returns a negative error code, your device has not been added to the system. It
almost always succeeds
• however, and that brings up the other point: as soon as cdev_add returns, your device is “live” and its operations can be
called by the kernel.
• You should not call cdev_add until your driver is completely ready to handle operations on the device.
• cdev_add() is used to make an association of the ‘struct file_operations *’ and the major, minor number region
De-Registration
• To remove a char device from the system, call:
• void cdev_del(struct cdev *dev);
Steps to register a character device driver with kernel
1. register_chrdev_region/alloc_chrdev_region
• tell the kernel to reserve the major and minor number range and driver name to act on device possessing these
major/minor numbers
• This fn creates an entry for the driver in the /proc/devices file, but not in the /dev dir.
• This fn is just for the purpose of reserving major/minor numbers in kernel.
2. create a struct file_operations structure and register device specific callbacks
3. cdev_init(struct cdev *, struct file_operations *)
• Initialize character device driver structure with its operations
4. Bind the file operations with the char device
• cdev.ops = &fops;
5. Add the character device to the kernel along with the major and minor number; from this point on the kernel can associate
opens on those device numbers with your file_operations (the struct inode and struct file objects are created later by the VFS,
when the device file is actually opened). However, your device would still not be created in the /dev dir.
cdev_add (struct cdev *, dev_t , 1);
6. use mknod command to manually create the device file in /dev dir.
7. For dynamic creation of the device file in the /dev dir … ?? (see the hint at the end of the sketch below)
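A hedged sketch tying steps 1–5 together for a single device (names are illustrative and error handling is trimmed). The final comment hints at step 7: on a udev-based system, creating a class and a device node (class_create()/device_create()) makes the /dev entry appear automatically; this is an assumption about the target setup, not part of the steps above:

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>

static dev_t devno;
static struct cdev my_cdev;

static int my_open(struct inode *inode, struct file *filp) { return 0; }

static struct file_operations my_fops = {       /* step 2 */
        .owner = THIS_MODULE,
        .open  = my_open,
};

static int __init my_init(void)
{
        int ret = alloc_chrdev_region(&devno, 0, 1, "myscull");  /* step 1 */
        if (ret < 0)
                return ret;

        cdev_init(&my_cdev, &my_fops);          /* steps 3 and 4 */
        my_cdev.owner = THIS_MODULE;

        ret = cdev_add(&my_cdev, devno, 1);     /* step 5: the device is "live" from here on */
        if (ret < 0)
                unregister_chrdev_region(devno, 1);
        return ret;
        /* step 6: mknod /dev/myscull c <major> 0 by hand, or
           step 7: class_create()/device_create() so udev creates the node for you */
}
module_init(my_init);
MODULE_LICENSE("GPL");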
Steps to de-register a character device driver with kernel
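The corresponding teardown, in reverse order of setup (a sketch matching the init code above):

static void __exit my_exit(void)
{
        cdev_del(&my_cdev);                     /* stop accepting operations on the device */
        unregister_chrdev_region(devno, 1);     /* release the reserved device numbers */
        /* a manually created /dev node is removed with: rm /dev/myscull */
}
module_exit(my_exit);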
Association Diagram among kernel structures
Kernel structures and how they point to each other:
• struct inode * – holds the info about the file; inode->i_cdev = cdev
• struct cdev * – represents the char device internally in Linux; cdev->ops = &file_operations; cdev->dev = dev_t dev
• struct file * – represents an open file descriptor in kernel space; filp->f_op = &file_operations; filp->private_data is often set to the cdev (or other driver data)
• struct file_operations * – represents the set of operations to be performed on a file
Open System Call
Close System Call
Open Vs Close System calls on a kernel object
1. When an open() system call is issued on a kernel object (device file), the kernel creates an open file descriptor structure: struct
file *
2. Always, a new struct file * is allocated in kernel every time an open() sys call is issued on a device file.
3. The objective is: for each open(), there should be one and only one release method invocation in the driver.
4. If a process issues n open() sys calls on a device file/kernel object, n struct file structures are created in the kernel.
5. Now, if a process fork()s or dup()s the descriptor, no new struct file is created in the kernel. Only the reference count of the existing
struct file is incremented. This reference count denotes the number of file descriptors sharing the structure.
6. When close() is issued, the reference count of the struct file is decremented. If it reaches 0, the release method is invoked in the driver.
7. This is how it is ensured that for each open(), there is one and only one release() invocation.
char __user *buff
1. In the kernel, char __user * marks a pointer into the calling process's user-space virtual address space; it refers to a virtual
address of the process, not to a kernel address.
2. We should not directly dereference the user space address in kernel code for the following reasons:
1. The page of the process may not be resident in main memory when kernel code attempts to access that user-space
address, leading to a page fault. Kernel code must not blindly generate such page faults; if it does, the result is the
termination of the process in whose context the kernel system call was made.
2. The pointer in question has been supplied by a user program, which could be buggy or malicious. If your driver ever
blindly dereferences a user-supplied pointer, it provides an open doorway allowing a user-space program to access or
overwrite memory anywhere in the system. If you do not wish to be responsible for compromising the security of your
users’ systems, you cannot ever dereference a user-space pointer directly.
3. Obviously, your driver must be able to access the user-space buffer in order to get its job done. This access must always be
performed by special, kernel-supplied below functions, however, in order to be safe. These fns are defined in
<linux/uaccess.h>
Note : copy_from_user/copy_to_user may sleep (the calling thread can be preempted), so they must not be called from atomic context.
unsigned long copy_to_user(void __user *to, const void *from, unsigned long count);
unsigned long copy_from_user(void *to, const void __user *from, unsigned long count);
char __user *buff
4. Although these functions behave like normal memcpy functions, a little extra care must be used when accessing user space
from kernel code.
• The user pages being addressed might not be currently present in memory, and the virtual memory subsystem can put
the process to sleep while the page is being transferred into place.
• This happens, for example, when the page must be retrieved from swap space. In other words, the driver is touching a
user-space page that may not be resident; the resulting page fault can put the calling process, and thus the driver code
running in its context, momentarily to sleep.
• The net result for the driver writer is that any function that accesses user space must be reentrant, must be able to
execute concurrently with other driver functions, and, in particular, must be in a position where it can legally sleep.
• The role of the two functions is not limited to copying data to and from user-space
• they also check whether the user space pointer is valid. If the pointer is invalid, no copy is performed;
• if an invalid address is encountered during the copy, on the other hand, only part of the data is copied.
• In both cases, the return value is the amount of memory still to be copied.
• Be careful, if you do not check a user-space pointer that you pass to these functions, then you can create kernel
crashes and/or security holes.
• More detail on this in chapter 5 and 6 of LDD3 book.
Operations on user space address (char __user *buff) in kernel land
access_ok()
• Page 142 , LDD3 …
put_user(datum, ptr)
get_user(local, ptr)
• page 143, LDD3
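A small hedged sketch of the single-value helpers, as they might appear inside an ioctl method where arg carries a user-space pointer (the "+1" operation is purely illustrative):

int value;

if (get_user(value, (int __user *)arg))         /* fetch one int from user space */
        return -EFAULT;                         /* invalid user pointer */

value += 1;                                     /* hypothetical device-specific work */

if (put_user(value, (int __user *)arg))         /* write the result back to user space */
        return -EFAULT;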
read System Call
• The return value for read is interpreted by the calling application program:
• If the value equals the count argument passed to the read system call, the requested number of bytes has been
transferred. This is the optimal case.
• If the value is positive, but smaller than count, only part of the data has been transferred. This may happen for a number
of reasons, depending on the device. Most often, the application program retries the read.
• if you read using the fread library function, it reissues the system call until the requested data transfer completes. In the
kernel, the counterpart of fread is read_iter()
• If the value is 0, end-of-file was reached (and no data was read).
• A negative value means there was an error. The value specifies what the error was, according to <linux/errno.h>.
• What is missing from the preceding list is the case of “there is no data, but it may arrive later.” In this case, the read
system call should block. We’ll deal with blocking input in Chapter 6.
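A hedged sketch of a read method that follows these return-value rules. struct my_dev, its buf and size fields, and the use of filp->private_data are assumptions made for illustration:

struct my_dev { char *buf; size_t size; };      /* hypothetical per-device state */

static ssize_t my_read(struct file *filp, char __user *buf,
                       size_t count, loff_t *f_pos)
{
        struct my_dev *dev = filp->private_data;

        if (*f_pos >= dev->size)
                return 0;                       /* end of file, nothing read */
        if (count > dev->size - *f_pos)
                count = dev->size - *f_pos;     /* clamp to what is available */

        if (copy_to_user(buf, dev->buf + *f_pos, count))
                return -EFAULT;                 /* bad user-space pointer */

        *f_pos += count;                        /* update position via the pointer, not filp->f_pos */
        return count;                           /* number of bytes actually transferred */
}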
write System Call
• write, like read, can transfer less data than was requested, according to the following rules for the return value:
• If the value equals count, the requested number of bytes has been transferred.
• If the value is positive, but smaller than count, only part of the data has been transferred. The program will most likely
retry writing the rest of the data.
• If the value is 0, nothing was written. This result is not an error, and there is no reason to return an error code. Once
again, the standard library retries the call to write.
• We’ll examine the exact meaning of this case in Chapter 6, where blocking write is introduced.
• A negative value means an error occurred; as for read, valid error values are those defined in <linux/errno.h>.
• some programmers are accustomed to seeing write calls that either fail or succeed completely, i.e. partial write is not
supported.
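A matching hedged sketch of a write method, using the same hypothetical my_dev structure as in the read sketch, extended with a capacity field:

static ssize_t my_write(struct file *filp, const char __user *buf,
                        size_t count, loff_t *f_pos)
{
        struct my_dev *dev = filp->private_data;

        if (*f_pos >= dev->capacity)
                return -ENOSPC;                 /* no room left on the device */
        if (count > dev->capacity - *f_pos)
                count = dev->capacity - *f_pos; /* partial write is allowed */

        if (copy_from_user(dev->buf + *f_pos, buf, count))
                return -EFAULT;

        *f_pos += count;
        if (*f_pos > dev->size)
                dev->size = *f_pos;             /* grow the logical size of the device data */
        return count;                           /* may be less than requested */
}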
errno
• Both the read and write methods (Or any other method) return a negative value if an error occurs.
• A return value greater than or equal to 0, instead, tells the calling user space program how many bytes have been successfully
transferred.
• If some data is transferred correctly and then an error happens, the return value must be the count of bytes successfully
transferred
• kernel read and write implementations return a negative number to signal an error, and the value of the number indicates the
kind of error that occurred
• However, programs that run in user space always see –1 as the error return value
• They need to access the errno variable to find out what happened.
Concurrency problem
• There could be multiple processes operating on the same device, issuing the arbitrary system calls to perform operation on a
device.
• Let's say process A is reading the device memory using read(), at offset 50, reading 10 bytes.
• Meanwhile, process B opens the device using the open() system call. Suppose open() trims/truncates the device, resetting the
current position pointer in device memory to zero.
• At this point, process A would see end of file, and read() would return 0 – no data to read.
• Thus if the device needs to be shared amongst several process – Mutual Exclusion is required in a driver code.
• Vector variants of read and write : readv and writev
• struct iovec structures are created by the application, but the kernel copies them into kernel space before calling the driver.
• Hence, struct iovec * is not a user space pointer in this case.
The Big picture
behind the scenes
• An application wants to perform read/write operations on the new hardware in town, say a printer.
• Corresponding to that printer, the /dev dir contains a character device file (CDF) created using the mknod cmd (or otherwise).
• The application in user space operates on the CDF as if it were a plain text file, by calling the usual
read/write/open/close system calls.
• These system calls are wrappers around the corresponding methods written in the device driver Linux module.
• The user space application is innocent: it doesn't know what kind of hardware it is communicating with, i.e. it does not know how
to read/write data from/to the hardware.
• All the application knows is how to issue read/write operations using the read/write system calls.
• The device driver sits in kernel space and understands how to perform the printer-specific I/O. The driver maps
the generic user space read/write/open/close file operations onto device specific operations.
• Each device file has a major number which identifies the device driver associated with the device file; the driver performs I/O
on behalf of the user-space application.
Application -> file sys call (read/write etc.) on the device file -> the sys call maps to the corresponding fn handler in the device driver kernel module -> which performs the actual file I/O
on the device.
Kernel Debugging
Techniques
Kernel API to print device major and minor numbers
• Occasionally, when printing a message from a driver, you will want to print the device number associated with the hardware
of interest
• the kernel provides a couple of utility macros (defined in <linux/kdev_t.h>) for this purpose :
A. int print_dev_t(char *buffer, dev_t dev);
B. char *format_dev_t(char *buffer, dev_t dev);
• Both macros encode the device number into the given buffer
Kernel API to print the user space process info in whose context the driver/kernel code is executing
#include <linux/sched.h>
struct task_struct *current_task = get_current();
printk(KERN_INFO "The process is \"%s\" (pid %i)\n", current_task->comm, current_task->pid);
To ensure that certain module is successfully loaded in kernel and status of it
[root@localhost proc]# cat /proc/modules
myscull 3511 0 - Live 0x0b852000 (O)
Debugging using /proc file system
• The /proc filesystem is a special, software-created filesystem
• It is used to transfer information between userspace and kernel(including drivers)
• The /proc file system is usually used to transfer information residing in kernel space to user space, but it can sometimes be used to
transfer information in the other direction as well.
• /proc is just a directory structure with dir and files arranged in hierarchical fashion.
• Each file under /proc is tied to a kernel function that generates the file’s “contents” on the fly when the file is read
• /proc/modules, for example, always returns a list of the currently loaded modules.
• Many utilities on a modern Linux distribution, such as ps, top, and uptime, get their information from /proc
• Device drivers also export information via /proc
• The /proc filesystem is dynamic, so your module can add or remove entries at any time
• Entries in /proc can be written to as well as read from. Most of the time, however, /proc entries are read-only files.
• The /proc filesystem is seen by the kernel developers as a bit of an uncontrolled mess that has gone far beyond its original
purpose (which was to provide information about the processes running in the system)
• Therefore, adding files under /proc is discouraged
• The recommended way of making information available in new code is via sysfs.
• files under /proc are slightly easier to create, and they are entirely suitable for debugging purposes
• its use is discouraged nowadays.
• More on this later ….
Concurrency and Race
Conditions
Introduction
• The management of concurrency is, however, one of the core problems in operating systems programming.
• Concurrency-related bugs are some of the easiest to create and some of the hardest to find
• One of the main sources of concurrency is servicing multiple hardware interrupts concurrently
• Device driver programmers must now factor concurrency into their designs from the beginning, and they must have a strong
understanding of the facilities provided by the kernel for concurrency management.
• Consider the following scenario:
• There is hardware device memory represented by a device file in /dev, say myscull0
• Let's say processes p1 and p2 open the device file and trigger a write() operation on it. Both processes write some data.
• Depending on how the driver code is scheduled (kernel code is preemptible), the memory write in the driver may happen for
process p1 first and then p2, or the other way around.
• In both cases, the data of whichever process wrote first is going to be lost.
• This is an example of a race condition resulting in inconsistency – including crashes, panics, memory leaks etc.
• Race conditions are a result of uncontrolled access to shared data
• Device driver code should be re-entrant; that is, it should be written assuming there can be multiple user-space processes
attempting to access the same device at the same time.
• Hence, time to learn Concurrency control principles in kernel ecosystem
Driver Concurrent programming principles and guidelines
• Avoid having shared resources in the first place.
• Avoid Global Variables
• But, Hardware resources are, by their nature, shared. Sharing is a fact of life.
• The usual technique for access management is called locking or mutual exclusion—making sure that only one
thread of execution can manipulate a shared resource at any time
Mutual Exclusion - Semaphore
• Mutual Exclusion is achieved using Semaphores and Mutexes
• Linux implementation of Semaphores
• must include <linux/semaphore.h>
• type is struct semaphore
• Static Semaphore initialization macros and fns
• void sema_init(struct semaphore *sem, int val); where val is the initial value to assign to a semaphore.
• Short hand macros
• DECLARE_MUTEX(name);
Same as : struct semaphore sem; sema_init(&sem, 1);
• DECLARE_MUTEX_LOCKED(name);
Same as : struct semaphore sem; sema_init(&sem, 0);
• Dynamic Or runtime semaphore initialization
void init_MUTEX(struct semaphore *sem); // initialized with initial val = 1
void init_MUTEX_LOCKED(struct semaphore *sem); // initialized with initial val = 0
Mutual Exclusion – P (down)
• down() decrements the value of the semaphore and if < 0, blocks the execution flow
• There are three versions of down:
• void down(struct semaphore *sem);
>> Non-interruptible; may lead to unkillable processes
• int down_interruptible(struct semaphore *sem);
>> Interruptible by the user (by a signal). The function returns zero on success and a nonzero value if the operation was interrupted, in which case the
caller does not hold the semaphore. Hence a check on the return value should always be placed.
• int down_trylock(struct semaphore *sem);
>> Non Blocking
• Once a thread has successfully called one of the versions of down, it is said to be “holding” (or to have “acquired”) the semaphore
• When the operations requiring mutual exclusion are complete, the semaphore must be returned. The Linux equivalent to V is
up (Signal). Next slide.
Mutual Exclusion – V (up)
• Once up has been called, the caller no longer holds the semaphore.
• up increments the value of the semaphore
• any thread that takes out a semaphore is required to release it with one (and only one) call to up
• Fn prototype
void up(struct semaphore *sem);
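A hedged sketch of the classic down/up pattern around a critical section in a driver method, assuming a per-device semaphore that was initialized with sema_init(&dev->sem, 1) at setup time:

if (down_interruptible(&dev->sem))
        return -ERESTARTSYS;    /* interrupted by a signal; we do NOT hold the semaphore */

/* ... critical section: touch the shared device data ... */

up(&dev->sem);                  /* exactly one up() per successful down() */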
Comparing with user space locking primitives
For a sem initialized with 1 :
down() = pthread_mutex_lock()
up() = pthread_mutex_unlock()
The major difference is that pthread_mutex_unlock() must be called by the same thread that called pthread_mutex_lock().
up(), however, can be called by another kernel thread, and the kernel thread that called down() and got blocked will then resume.
Hence, up() also behaves like a signal operation.
Reader/Writer Semaphore (rarely used)
• struct semaphore provides mutual exclusion on a critical section irrespective of the nature of the operation (read or write) the threads
intend to perform on the shared resource
• This locking mechanism is highly undesirable in a situation where all the threads competing for the resource intend to perform read
operations only.
• This gives rise to – reader writer problem
• The Linux kernel provides a special type of semaphore called a rwsem (or “reader/writer semaphore”) for this situation
defined in <linux/rwsem.h>
• struct rw_semaphore
• Initialized at runtime using :
void init_rwsem(struct rw_semaphore *sem);
Read lock
void down_read(struct rw_semaphore *sem); void up_read(struct rw_semaphore *sem);
int down_read_trylock(struct rw_semaphore *sem); returns nonzero on success, else zero
Write lock
void down_write(struct rw_semaphore *sem); void up_write(struct rw_semaphore *sem);
int down_write_trylock(struct rw_semaphore *sem); returns nonzero on success, else zero
Reader/Writer Semaphore (rarely used)
• Note that down_read may put the calling process into an uninterruptible sleep
• down_read_trylock will not wait if read access is unavailable; it returns nonzero if access was granted, 0 otherwise.
• If you have a situation where a writer lock is needed for a quick change, followed by a longer period of read-only access, you
can use downgrade_write to allow other readers in once you have finished making changes.
void downgrade_write(struct rw_semaphore *sem);
• A rwsem allows either one writer or an unlimited number of readers to hold the semaphore.
• Writers get priority; as soon as a writer tries to enter the critical section, no readers will be allowed in until all writers have
completed their work.
• This implementation can lead to reader starvation—where readers are denied access for a long time—if you have a large
number of writers contending for the semaphore.
• For this reason, rwsems are best used when write access is required only rarely, and writer access is held for short periods of
time.
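A brief sketch, assuming a per-device struct rw_semaphore named rwsem that was initialized with init_rwsem():

/* many concurrent readers are allowed */
down_read(&dev->rwsem);
/* ... read-only access to the shared data ... */
up_read(&dev->rwsem);

/* exactly one writer at a time */
down_write(&dev->rwsem);
/* ... modify the shared data ... */
up_write(&dev->rwsem);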
Completions (introduced in kernel 2.4.7)
Performance degradation with semaphores in situations where caller thread has to wait for external activity to complete
A common pattern in kernel programming involves initiating some activity outside of the current thread, then waiting for that
activity to complete. This activity can be the creation of a new kernel thread or user-space process, a request to an existing
process, or some sort of hardware-based action.
In such cases, it can be tempting to use a semaphore for synchronization of the two tasks, with code such as:
struct semaphore sem;
init_MUTEX_LOCKED(&sem);      /* initialize the semaphore with value 0 */
start_external_task(&sem);    /* start some external activity in a new kernel thread */
down(&sem);                   /* make the current thread wait */
The external task can then call up(&sem) when its work is done; this resumes the caller thread.
• if there is significant contention for the semaphore by the external thread, performance suffers
• When used to communicate task completion in the way shown above, however, the thread calling down will almost always
have to wait
Completions (contd …)
• As a solution to the problem we discussed, the mechanism of “Completions” was introduced in linux kernel
Completions are a lightweight mechanism with one task:
allowing one thread to tell another that the job is done
• To use completions, your code must include <linux/completion.h>
• URL : https://www.kernel.org/doc/Documentation/scheduler/completion.txt
• If you have one or more threads of execution that must wait for some process to have reached a point or a specific state,
completions can provide a race-free solution to this problem.
• Semantically they are somewhat like a pthread_barrier and have similar use-cases.
• Code is found in kernel/sched/completion.c
Completions (contd …)
• Declaring a Completion
statically
DECLARE_COMPLETION(my_completion);
dynamically
struct completion my_completion;
init_completion(&my_completion);
• Waiting for the completion is a simple matter of calling:
void wait_for_completion(struct completion *c);
Note that above function performs an uninterruptible wait. If your code calls wait_for_completion and nobody ever
completes the task, the result will be an unkillable process !!
• On the other side, the actual completion event may be signalled by calling one of the following:
void complete(struct completion *c);
void complete_all(struct completion *c);
Completions (contd …)
• The complete() calls behave differently if more than one thread is waiting for the same completion event
• complete() : wakes up only one of the waiting threads
• complete_all() : allows all of the waiting threads on wait_for_completion() to proceed.
In most cases, there is only one waiter, and the two functions will produce an identical result
Completion structure is a one shot tool
• Re-initialize the structure before using it again using macro :
INIT_COMPLETION(struct completion c);
Q. How using completion as a technique to wait for the event completion is different from the one implemented using
semaphore ?
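A hedged sketch of the completion pattern: the caller starts a hypothetical worker thread and waits; the worker signals when the job is done (worker_fn and the thread name are made up):

#include <linux/completion.h>
#include <linux/kthread.h>

static DECLARE_COMPLETION(work_done);

static int worker_fn(void *data)
{
        /* ... perform the external activity ... */
        complete(&work_done);                   /* wake up exactly one waiter */
        return 0;
}

/* in the caller */
kthread_run(worker_fn, NULL, "my_worker");      /* start the external activity */
wait_for_completion(&work_done);                /* uninterruptible wait until complete() runs */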
Spin locks
• Another locking mechanism in kernel is the spin locks
• Spin locks are used to implement mutual exclusion in code that MUST not sleep (i.e. must not make a blocking
call that results in a context switch, such as wait_for_completion() or down())
• Unlike semaphores, spin locks avoid context switching; hence performance is much better than semaphores in routines that
are not supposed to sleep, such as interrupt handlers.
• When a kernel thread tries to lock an already locked semaphore (binary semaphore), it goes to sleep; a context switch
happens.
• When a kernel thread tries to lock an already locked spin lock, it goes into a tight loop, testing (“test”) the state of the lock
until it becomes available and can be grabbed (“set”); no context switch happens
• The “test and set” operation must be done in an atomic manner so that only one thread can obtain the lock, even if several
are spinning at any given time
• spinlocks are, by their nature, intended for use on multiprocessor systems
• Next we study Spin lock API
Spin lock kernel API
• To be included : <linux/spinlock.h>
• Data structure type : spinlock_t
• Initialization
compile time
spinlock_t my_lock = SPIN_LOCK_UNLOCKED;
run time
void spin_lock_init(spinlock_t *lock);
• Grab the lock
void spin_lock(spinlock_t *lock);
Note that all spinlock waits are, by their nature, uninterruptible. Once you call spin_lock, you will spin until the
lock becomes available.
• Release the lock
void spin_unlock(spinlock_t *lock);
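A minimal sketch of spinlock usage protecting a short, non-sleeping critical section (my_lock and shared_counter are illustrative):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);        /* or: spinlock_t my_lock; spin_lock_init(&my_lock); */
static int shared_counter;

void bump_counter(void)
{
        spin_lock(&my_lock);            /* spins until the lock is free; never sleep in here */
        shared_counter++;               /* keep the critical section short and atomic */
        spin_unlock(&my_lock);
}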
Rules for using spin locks to avail high performance
• the core rule that applies to spinlocks is that :
any code must, while holding a spinlock, be atomic, i.e. It cannot sleep; in fact, it cannot relinquish the processor for any
reason except to service interrupts (and sometimes not even then).
Reason : if your kernel thread holding a spinlock is put to sleep, the new thread given the CPU may happen to ask for the same
lock; since it is already locked by the thread (which is now sleeping), the new thread will spin until the sleeping thread is
scheduled back in and releases the lock – a performance hit
• Hence, your kernel code should not call any fn which triggers context switching
• But, what if natural kernel preemption kicks in ?
Any time kernel code holds a spinlock, natural kernel preemption is disabled on the local processor
• Writing code that will execute under a spinlock requires paying attention to every function that you call. Avoid calling functions
that can sleep (trigger a context switch). Examples of such fns could be :
• Copying data to or from user space : the required user-space page may need to be swapped in from the disk before
the copy can proceed
• kmalloc
Rules for using spin locks to avail high performance
• But what will happen if the thread currently holding a spinlock is interrupted because a hardware interrupt is raised and the interrupt
handler needs to access the same critical section ?
Two scenarios arise here :
1. If the interrupt service routine (ISR) is assigned another processor, say, p1
• ISR is loaded on processor p1
• ISR waits to acquire the spin lock from current thread t1 (running on processor p2)
• t1 runs to completion and releases the spinlock
• the ISR grabs the lock, runs atomically, and releases the spin lock when done
2. if the interrupt service routine is assigned the same processor on which the kernel thread holding the spin
lock is currently executing
• t1 is suspended immediately
• the ISR is loaded on the processor
• the ISR spins on the lock that is held by t1, which is now suspended
• Deadlock !!
Conclusion : hardware interrupts need to be disabled on the processor on which the spin lock is held by a running kernel
thread, if scenario 2 is likely to take place
Spin Locks Summary
When spin locks are used
• Natural kernel context switching is disabled on the current processor
• Interrupt lines are disabled on current processor
• Programmer MUST not call sleep triggering fns in his code
• spinlocks must always be held for the minimum time possible.
• spinlocks are designed only to be used in code which should not sleep – interrupt handlers
• spinlocks are, by their nature, intended for use on multiprocessor systems
• Kernel Preemption can occur when
• When returning to kernel-space from an interrupt handler
• When kernel code becomes pre-emptible again
• If a task in the kernel explicitly call a schedule
• If a task in the kernel blocks
Spin Locking APIs
Interrupt handlers come in two flavors: software IHs (generated by softirqs) and hardware IHs (generated by hardware IRQs).
• As we have seen earlier, not disabling interrupts while using spinlocks that are shared with interrupt handlers can lead to deadlocks.
• Depending on whether a software IH or a hardware IH could potentially access the shared resource, you should use the
appropriate spin_lock() variant to disable interrupts:
• spin_lock()/spin_unlock() – disable kernel preemption on the local processor; interrupts stay enabled
• spin_lock_irq()/spin_unlock_irq() – additionally disable hardware interrupts on the local processor
• spin_lock_bh()/spin_unlock_bh() – additionally disable software interrupts (bottom halves) on the local processor
• spin_lock_irqsave()/spin_unlock_irqrestore() – disable interrupts (on the local processor only) before taking the
spinlock; the previous interrupt state is stored in flags
Reader/Writer Spinlocks
• Skipping ….
Do’s and Dont’s while using locks
Locking twice the same resource
• Define the fns which lock the resource in a header file for others to use. Internal fns which assume the resource has already
been locked by the caller should either be internal static fns, or well documented.
• A fn which has locked the resource should not call another fn which in turn attempts to lock the same resource again – Deadlock !
Lock Ordering Rules
• Can lead to deadlock again.
• Eg :
Thread A: lock(r1); lock(r2);
Thread B: lock(r2); lock(r1);
Soln : Follow the same sequence when locking multiple resources
Try to avoid code acquiring multiple locks in the first place.
Combination of Semaphores and Spinlocks
• In situations where you need to use both the semaphore and the spinlock at the same time, always acquire the semaphore first, before
spin_lock.
• Doing it the other way round (calling down(), which may sleep, while holding a spinlock) will lead to deadlock, and is a serious error
Do’s and Dont’s while using locks
Fine- Versus Coarse-Grained Locking
• you should start with relatively coarse locking unless you have a real reason to believe that contention could be a problem
• Avoid very coarse grained locks as well.
• If you do suspect that lock contention is hurting performance, you may find the lockmeter tool useful. This patch (available at
http://oss.sgi.com/projects/lockmeter/) instruments the kernel to measure time spent waiting in locks.
• By looking at the report, you are able to determine quickly whether lock contention is truly the problem or not.
Lock-Free Algorithms
• If possible, try to design your data structure as a lock free data structure
• Eg : circular buffers, available at <linux/kfifo.h>
Atomic Variables
• Try to use atomic variables and the related kernel API to manipulate them, if a shared resource is as simple as an integer variable.
• the kernel provides an atomic integer type called atomic_t, defined in <asm/atomic.h>.
• It cannot hold integer value larger than represented by 24 bits
• Refer pg 125, LDD3, for API.
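A short sketch of the atomic_t API enforcing a hypothetical single-open policy (the counter name and the policy itself are illustrative):

#include <asm/atomic.h>

static atomic_t open_count = ATOMIC_INIT(0);    /* shared counter, no lock needed */

/* in the open method */
if (atomic_inc_return(&open_count) > 1) {       /* atomically increment and read back */
        atomic_dec(&open_count);
        return -EBUSY;                          /* someone else already has the device open */
}

/* in the release method */
atomic_dec(&open_count);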
Do’s and Dont’s while using locks
Atomic Bit Operations
• The atomic_t type is good for performing integer arithmetic. It doesn’t work as well, however, when you need to manipulate
individual bits in an atomic manner.
• the kernel offers a set of functions that modify or test single bits atomically
• Atomic bit operations are very fast, since they perform the operation using a single machine instruction without disabling
interrupts (probably it is true for operations on atomic_t variable as well)
• Bit operations are defined in <asm/bitops.h>
• API : pg 127 , LDD3
NOTE : For the multiple threads of the same process in kernel space, there is only one and only one struct task_struct process
instance in kernel space.
Seq Locks - lockless access to a shared resource
• Skipping …
RCU locking mechanism
• Skipping …
Chapter 6
Advanced Driver
Operations
IOCTL (Input Output Control)
• We start with implementing the ioctl system call, which is a common interface used for device control
• ioctl system call controls and perform various other operations on a device
• ioctl is often the easiest and most straightforward choice for true device operations
• To be included - <linux/ioctl.h>
• In user space, the ioctl system call has the following prototype:
int ioctl(int fd, unsigned long cmd, ...);
• Note that the third argument is not a variadic argument list, but a single optional argument whose type is not checked at
compile time
• The actual nature of the third argument depends on the specific control command being issued (the second
argument)
• Some commands take no arguments, some take an integer value, and some take a pointer to other data. Using a
pointer is the way to pass arbitrary data to the ioctl call; the device is then able to exchange any amount of data
with user space.
• the value of the ioctl cmd argument is not currently used by the kernel, and it’s quite unlikely it will be in the future
IOCTL (Input Output Control)
• Ioctl() prototype in driver side
int (*ioctl) (struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
• Let us brief about the arguments passed to the above ioctl call:
• The inode and filp pointers are the values corresponding to the file descriptor fd passed on by the application and
are the same parameters passed to the open method
• The cmd argument is passed from the user unchanged, it should be unique SYSTEM-WIDE
• the optional arg argument is passed in the form of an unsigned long, regardless of whether it was given by the user
as an integer or a pointer.
• Because compile time type checking is disabled on the extra argument arg, the compiler can’t warn you if an
invalid argument is passed to ioctl, and any associated bug would be difficult to spot
Most ioctl implementations consist of a big switch statement that selects the correct behavior according to the cmd
argument
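A hedged sketch of that pattern using the 2.6-era prototype above; scull_quantum and the SCULL_* command macros are illustrative (they are defined with the _IO* macros discussed on the next slides), and copy_to_user/copy_from_user come from <asm/uaccess.h> on such kernels:

#include <linux/fs.h>
#include <asm/uaccess.h>        /* copy_to_user / copy_from_user */
#include "scull_ioctl.h"        /* hypothetical: SCULL_IOC_* command macros */

static int scull_quantum = 4000;        /* the tunable this ioctl controls */

static int scull_ioctl(struct inode *inode, struct file *filp,
                       unsigned int cmd, unsigned long arg)
{
        int retval = 0;

        switch (cmd) {
        case SCULL_IOC_RESET:                   /* no argument */
                scull_quantum = 4000;
                break;
        case SCULL_IOCSQUANTUM:                 /* set: read an int from user space */
                if (copy_from_user(&scull_quantum, (int __user *)arg, sizeof(int)))
                        retval = -EFAULT;
                break;
        case SCULL_IOCGQUANTUM:                 /* get: write an int to user space */
                if (copy_to_user((int __user *)arg, &scull_quantum, sizeof(int)))
                        retval = -EFAULT;
                break;
        default:
                retval = -ENOTTY;               /* unknown command for this device */
        }
        return retval;
}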
IOCTL (Input Output Control)
• As stated, the middle argument of the user-space ioctl fn, the int cmd argument, should be unique system wide
• The Linux kernel defines a convention for choosing a unique cmd value
• The 32-bit cmd is split into smaller bit groups, each with a specific meaning as shown below.
• The magic number is a unique 8-bit number, which is specific to a device. Every hardware device in the system which is being
operated by a driver has a unique magic number. You should choose a magic number which has not been taken already
• Taken/reserved magic numbers are listed in Documentation/ioctl-number.txt.
• Since 14 bits are available for specifying the size of user data to be passed to driver, using single ioctl call, more than 2^14
bytes of data cannot be passed to the driver
Eg : _IOW(SCULL_IOC_MAGIC, 2, int) – gives the system-wide unique cmd no
Magic no = SCULL_IOC_MAGIC, Cmd seq no = 2,
Data transfer dirn = WRITE (because the write macro is used), size of data = sizeof(int)
Layout of the 32-bit cmd: an 8-bit magic number, an 8-bit sequence number, 2 bits for the data transfer direction,
and 14 bits for the size of the data specified in the 3rd argument
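Putting the convention together, a hypothetical shared header might look like this (the names mirror LDD3's scull example but are only illustrative):

/* scull_ioctl.h -- hypothetical header shared by the driver and applications */
#include <linux/ioctl.h>

#define SCULL_IOC_MAGIC  'k'   /* pick a magic number not listed in ioctl-number.txt */

#define SCULL_IOC_RESET    _IO(SCULL_IOC_MAGIC, 0)         /* no data transfer */
#define SCULL_IOCSQUANTUM  _IOW(SCULL_IOC_MAGIC, 1, int)   /* user -> driver   */
#define SCULL_IOCGQUANTUM  _IOR(SCULL_IOC_MAGIC, 2, int)   /* driver -> user   */

#define SCULL_IOC_MAXNR 2      /* highest sequence number in use */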
IOCTL (Input Output Control)
• Given a 32 bit number, which represents the cmd , the 2nd arg to ioctl user space call, In driver we can extract each
component from the cmd using following macros
• _IOC_TYPE(cmd) -- returns the magic no
• _IOC_NR(cmd) -- returns the operation sequence no
• _IOC_DIR(cmd) -- returns the direction: _IOC_READ means the user program wants to read from the device memory,
_IOC_WRITE means the user program wants to write into the device memory (the two can be OR'd together). A small validation sketch follows.
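A small sketch of how a driver might reject foreign commands before the big switch, reusing the hypothetical SCULL_IOC_MAGIC and SCULL_IOC_MAXNR from the header sketch above:

#include <linux/ioctl.h>
#include <linux/errno.h>

/* Reject commands that do not belong to this driver. */
static int scull_check_cmd(unsigned int cmd)
{
        if (_IOC_TYPE(cmd) != SCULL_IOC_MAGIC)  /* wrong magic number */
                return -ENOTTY;
        if (_IOC_NR(cmd) > SCULL_IOC_MAXNR)     /* unknown sequence number */
                return -ENOTTY;
        return 0;                               /* _IOC_DIR/_IOC_SIZE can be checked next */
}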
Blocking IO
• How does a driver respond to a request (an ioctl or read operation, say) that it cannot immediately satisfy?
• A call to read may come when no data is available, but more is expected in the future
• Or a process could attempt to write, but your device is not ready to accept the data, because your output buffer is full.
• your driver should (by default) block the process, putting it to sleep until the request can proceed
• There are, however, a couple of rules that you must keep in mind to be able to code sleeps in a safe manner.
Rule 1 :
Never sleep when you are running in an atomic context.
• An atomic context is simply a state where multiple steps must be performed without any sort of concurrent access.
• Your driver should not sleep while holding a spinlock, seqlock, or RCU lock.
• You also cannot sleep if you have disabled interrupts.
• It is legal to sleep while holding a semaphore though, but such code should be written very carefully.
• any process that sleeps must check to be sure that the condition it was waiting for is really true when it wakes up again
Rule 2 :
you can make no assumptions about the state of the system after the thread wakes up, and it must check to ensure that the
condition you were waiting for is, indeed, true.
Blocking IO
• Rule 3 :
A process cannot sleep unless it is assured that somebody else, somewhere, will wake it up
To accomplish such controlled sleeping of a process, we use a wait queue. A wait queue is just what it sounds like: a list
of processes, all waiting for a specific event.
wait queue
• Data type : wait_queue_head_t
• Defined in <linux/wait.h>
• Static Initialization
DECLARE_WAIT_QUEUE_HEAD(name);
• Dynamic initialization
wait_queue_head_t my_queue;
init_waitqueue_head(&my_queue);
Blocking IO – sleep and wake up
• Sleep
A macro called wait_event is used to put the process to sleep. Its forms are wait_event(queue, condition),
wait_event_interruptible(queue, condition), wait_event_timeout(queue, condition, timeout), and
wait_event_interruptible_timeout(queue, condition, timeout):
• queue is the wait queue head to use. Notice that it is passed “by value”
• The condition is an arbitrary boolean expression that is evaluated by the macro before and after sleeping;
• until condition evaluates to a true value, the process continues to sleep.
• Note that condition may be evaluated an arbitrary number of times, so it should not have any side effects.
• If you use wait_event, your process is put into an uninterruptible sleep
• The preferred alternative is wait_event_interruptible, which can be interrupted by signals. It returns a nonzero value if the
process was awakened by a signal, and zero once the condition is true. In the former case your driver should return the
-ERESTARTSYS error.
• The final versions (wait_event_timeout and wait_event_interruptible_timeout) wait for a limited time; after that time
period expires, the macros return with a value of 0 regardless of how condition evaluates
• The condition should be tested atomically. A read method built on these macros is sketched below.
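A minimal read method built on these macros (sleepy_read, inq and data_ready are hypothetical names):

#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(inq);
static int data_ready;                  /* the condition readers wait for */

static ssize_t sleepy_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *f_pos)
{
        /* Sleep until data_ready becomes true; a signal aborts the sleep. */
        if (wait_event_interruptible(inq, data_ready != 0))
                return -ERESTARTSYS;    /* woken up by a signal */

        /* ... copy the available data to user space here ... */
        data_ready = 0;
        return 0;
}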
Blocking IO – sleep and wake up
• wake
• Some other thread of execution (a different process, or an interrupt handler, perhaps) has to perform the
wakeup for you, since your process is, of course, asleep. The basic function that wakes up sleeping processes is called
wake_up
The basic forms are wake_up(wait_queue_head_t *queue) and wake_up_interruptible(wait_queue_head_t *queue); the latter wakes only
processes that are performing an interruptible sleep.
• wake_up wakes up all processes waiting on the given queue
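Continuing the sketch from the previous slide (same hypothetical inq and data_ready), the wake-up side might live in the write method or in an interrupt handler:

static ssize_t sleepy_write(struct file *filp, const char __user *buf,
                            size_t count, loff_t *f_pos)
{
        /* ... accept the data from user space here ... */
        data_ready = 1;                 /* make the condition true first ...      */
        wake_up_interruptible(&inq);    /* ... then wake up the sleeping readers  */
        return count;
}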
Blocking & Non Blocking IO
• Explicitly nonblocking I/O is indicated by the O_NONBLOCK flag in filp->f_flags.
• The flag is defined in <linux/fcntl.h>, which is automatically included by <linux/fs.h>
• The flag is cleared by default, because the normal behavior of a process waiting for data is just to sleep
Behavior of blocking operation
read
• If a process calls read but no data is (yet) available, the process must block. The process is awakened as soon as some data
arrives, and that data is returned to the caller, even if there is less than the amount requested in the count argument
to the method
write
• If a process calls write and there is no space in the buffer, the process must block. The process is awakened and the write call
succeeds, although the data may be only partially written if there isn’t room in the buffer for the count bytes that were
requested
• Both these statements assume that there are both input and output buffers; in practice, almost every device driver has them.
• The input buffer is required to avoid losing data that arrives when nobody is reading
• In contrast, data can’t be lost on write, because if the system call doesn’t accept data bytes, they remain in the user-space
buffer. Even so, the output buffer is almost always useful for squeezing more performance out of the hardware.
Blocking & Non Blocking IO
Behavior of Non-blocking operation
• The behavior of read and write is different if O_NONBLOCK is specified. In this case, the calls simply return -EAGAIN (“try it
again”) if a process calls read when no data is available or
• if it calls write when there’s no space in the buffer.
• Non-blocking operations return immediately, allowing the application to poll for data
• Applications can easily mistake a non-blocking return for EOF when it actually means "data not available".
• Thus, they always have to check errno (EAGAIN) to tell the two cases apart.
• Only the read, write, and open file operations are affected by the non-blocking flag.
Non Blocking IO
Existing Non-blocking system calls :
poll, select, and epoll
• This support (for all three calls) is provided through the driver’s poll method. This method has the following prototype:
unsigned int (*poll) (struct file *filp, poll_table *wait); // in kernel 2.6.x
unsigned int (*poll) (struct file *, struct poll_table_struct *); // in kernel 4.1.x
• The driver’s poll method is called whenever the user-space program performs a poll, select, or epoll system call
• In select you specify the set of file descriptors. The select() system call iterates over all FDs in the fd_set and calls the driver’s poll()
function for each device represented by an fd.
• In the driver’s poll() method, we need to call poll_wait(), which implements the actual polling functionality.
void poll_wait(struct file * filp, wait_queue_head_t * wait_address, poll_table *p)
• poll_wait() adds the current process to the wait queue (2nd argument) and adds that wait queue to the poll table. There is one poll
table per process, and the same table is passed as the 3rd arg to poll().
• When poll() returns, all entries in the poll table are flushed; the table is repopulated when poll_wait() is called again
• The poll_table is something used internally by the kernel
Non Blocking IO – select()/poll()
• The poll() fn performs its task in two phases :
• Query phase
• Processing phase
• Query phase :
• When the select system call is issued in user space, the driver’s poll() fn is triggered for each device whose FD is specified in
the fd_set passed to select()
• If Data Is Not available
• If O_NONBLOCK is set
• Return immediately –EAGAIN from poll()
• If O_NONBLOCK is not set
• Call poll_wait() and return 0 -- returning zero causes the kernel to block on poll()
• If Data Is Available
• If O_NONBLOCK is set
• return POLLIN|POLLRDNORM -- this sets the FD in user space and unblocks the select system call
• If O_NONBLOCK is not set
• return POLLIN|POLLRDNORM -- this sets the FD in user space and unblocks the select system call
• As you see, no actual data read happens in the query phase of polling; the driver just tells user space whether the
data is ready to be read or written (see the poll() sketch below)
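A hedged sketch of a poll method implementing exactly this query phase, continuing the sleepy_* example from the Blocking IO slides (space_available is an assumed "output buffer has room" flag):

#include <linux/poll.h>

static DECLARE_WAIT_QUEUE_HEAD(outq);
static int space_available = 1;         /* hypothetical output-buffer flag */

static unsigned int sleepy_poll(struct file *filp, poll_table *wait)
{
        unsigned int mask = 0;

        /* Register the wait queues with the poll table; this never blocks here. */
        poll_wait(filp, &inq, wait);
        poll_wait(filp, &outq, wait);

        if (data_ready)
                mask |= POLLIN | POLLRDNORM;    /* readable  */
        if (space_available)
                mask |= POLLOUT | POLLWRNORM;   /* writable  */

        return mask;    /* 0 makes the caller block until a wake_up occurs */
}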
Non Blocking IO – select()/poll()
• The poll() fn performs its task in two phases :
• Query phase
• Processing phase
• Processing phase:
• The actual data read or written is done in this phase.
• When this phase completes, select() in user space completes one full poll, and the application is again made to block on the select()
call for the next poll iteration.
Summarizing the sequence of control flow for a typical blocking, and data not available scenario
select() ----> poll() ----> poll_wait() ----> return 0 (block) ----> wakeup() ----> poll() ----> return POLLIN|POLLRDNORM
----> select() (unblocks) ----> check fd_set ----> perform OP (say, read) ----> select() (one iteration complete)
The same flow is depicted in more detail in the rough sketch on the next slide
Different locking mechanism in Linux kernel
• struct semaphore sem
• Lock and unlock mechanism
• struct rw_semaphore
• Handle reader writer problem
• struct completion
• Go and complete this work and when you complete notify me, I am waiting until then
• Spinlocks
• Lock and unlock mechanism without sleep
• Reader/writer spinlocks
• Lock and unlock mechanism without sleep, handle reader writer problem
• RCU locking
• Seq locks
• Wait queue
• Implement blocking system calls
Reader(s)/writer(s) Synchronization Mechanism
struct task_struct
structure
struct task_struct structure (defined in <linux/sched.h>)
• How to Manipulate the state of a process in kernel
• struct task_struct *tsk
• tsk->state;
The above member can take the following values
• #define TASK_RUNNING 0 // not necessarily running on CPU, but runnable
• #define TASK_INTERRUPTIBLE 1 // denotes that process is sleeping
• #define TASK_UNINTERRUPTIBLE 2 // denotes that process is sleeping
• #define __TASK_STOPPED 4
• #define __TASK_TRACED 8
• tsk->exit_state
The above member can take the following values
• #define EXIT_DEAD 16
• #define EXIT_ZOMBIE 32
• #define EXIT_TRACE (EXIT_ZOMBIE | EXIT_DEAD)
• Setting the current state of the process
• void set_current_state(int new_state);
• current->state = TASK_INTERRUPTIBLE; // changing the state directly is discouraged.
• changing the current state of a process does not, by itself, put it to sleep or wake it up. By changing the current state, you
have changed the way the scheduler treats the process, but you have not yet yielded the processor (see the sketch below).
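For illustration only, here is roughly what the wait_event machinery does on your behalf; wq and cond are hypothetical, and real drivers should prefer the wait_event macros:

#include <linux/sched.h>
#include <linux/wait.h>

static void manual_sleep(wait_queue_head_t *wq, int *cond)
{
        DEFINE_WAIT(wait);

        while (!*cond) {
                /* marks the task TASK_INTERRUPTIBLE and queues it on wq */
                prepare_to_wait(wq, &wait, TASK_INTERRUPTIBLE);
                if (!*cond)
                        schedule();             /* only here is the CPU given up */
                if (signal_pending(current))
                        break;                  /* interrupted by a signal */
        }
        finish_wait(wq, &wait);                 /* back to TASK_RUNNING */
}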
struct task_struct structure (defined in <linux/sched.h>)
• How to Manipulate the state of a process in kernel
• struct task_struct *tsk
• tsk->pid // gives the process id
• tsk->comm // gives the process name (name of the binary)
Chapter 8
Memory Allocation
kmalloc
• Thus far, we have used kmalloc and kfree for the allocation and freeing of memory
• the kernel offers a unified memory management interface to the drivers.
• Kmalloc
• The function is fast (unless it blocks), and it is a preemptible call
• doesn’t clear the memory it obtains; the allocated region still holds its previous content
• The allocated region is also contiguous in physical memory.
#include <linux/slab.h>
void *kmalloc(size_t size, int flags);
• flags, 2nd argument – controls the behavior of kmalloc in no of ways
• GFP_KERNEL :
• means that the allocation is performed on behalf of a process running in kernel space.
• Using GFP_KERNEL means that kmalloc can put the current process to sleep waiting for a page when called in
low-memory situations.
• A function that allocates memory using GFP_KERNEL must, therefore, be reentrant and cannot be running in
atomic context.
• GFP_KERNEL isn’t always the right allocation flag to use; sometimes kmalloc is called from outside a process’s
context. For instance, in interrupt handlers, tasklets, and kernel timers. In this case, the current process should
not be put to sleep, and the driver should use a flag of GFP_ATOMIC instead.
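A short sketch contrasting the two most common flags (struct packet is a made-up object type):

#include <linux/slab.h>

struct packet { int len; unsigned char data[64]; }; /* hypothetical object */

/* Process context (open/read/write/ioctl): sleeping is allowed. */
static struct packet *alloc_in_process_ctx(void)
{
        return kmalloc(sizeof(struct packet), GFP_KERNEL);
}

/* Interrupt handler, tasklet or kernel timer: must never sleep;
 * be prepared for this allocation to fail. */
static struct packet *alloc_in_atomic_ctx(void)
{
        return kmalloc(sizeof(struct packet), GFP_ATOMIC);
}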
Kmalloc flags
• GFP_ATOMIC
• The kernel normally tries to keep some free pages around in order to fulfill atomic allocation.
• When GFP_ATOMIC is used, kmalloc can use even the last free page. If that last page does not exist, however,
the allocation fails.
• All flags defined in
<linux/gfp.h>
• GFP_USER
• Used to allocate memory for user-space pages; it may sleep.
• GFP_HIGHUSER
• The allocation flags listed above can be augmented by an ORing in any of the following flags, which change how the
allocation is carried out:
• __GFP_DMA -- only the DMA-capable zone is searched to allocate a page
• __GFP_HIGHMEM -- all three zones (next slide) are used to search for and allocate a free page.
• __GFP_COLD
• __GFP_NOWARN
• __GFP_REPEAT
• __GFP_NOFAIL
• __GFP_NORETRY
Memory Zones
• The Linux kernel knows about a minimum of three memory zones:
• DMA-capable memory,
• It is a memory that lives in a preferential address range, where peripherals (hardwares) can perform DMA access
• On the x86, the DMA zone is used for the first 16 MB of RAM
• Normal memory, and
• If no special flag (one starting with a double underscore) is present, both normal and DMA memory are searched for the allocation
• High memory
• High memory is a mechanism used to allow access to (relatively) large amounts of memory on 32-bit platforms
• This memory cannot be directly accessed from the kernel without first setting up a special mapping and is generally
harder to work with
The mechanism behind memory zones is implemented in mm/page_alloc.c
Kmalloc agument size
• The kernel manages the system’s physical memory, which is available only in page sized chunks.
• As a result, kmalloc looks rather different from a typical user-space malloc implementation
• Kernel memory is not owned by any particular process; it serves everyone. So there is no question of per-process virtual-address-based
memory management in the kernel.
• the kernel uses a special page-oriented allocation technique, and not the one used in user space, to get the best use
from the system’s RAM.
• Linux handles memory allocation by creating a set of pools of memory objects of fixed sizes. Allocation requests are handled
by going to a pool that holds sufficiently large objects and handing an entire memory chunk back to the requester.
• The kernel can allocate only certain predefined, fixed-size byte arrays (how much ?). If you ask for an arbitrary amount of memory,
you’re likely to get slightly more than you asked for, up to twice as much.
• The smallest allocation that kmalloc can handle is as big as 32 or 64 bytes, depending on the page size used by the
system’s architecture.
• There is an upper limit to the size of memory chunks that can be allocated by kmalloc.
• If your code is to be completely portable, it cannot count on being able to allocate anything larger than 128 KB.
• If you need more than a few kilobytes, however, there are better ways than kmalloc to obtain memory
Fragmentation
First, let us understand the problems that arise in memory allocation:
Fragmentation
• External fragmentation
• Internal fragmentation
Fragmentation
• The concept of paging resolves the problem of external and internal fragmentation at the process level.
• Within a process, while allocating memory objects (malloc/calloc) and freeing them at run time (free), the process’s virtual
address space suffers from external and internal fragmentation.
• Slab allocation scheme attempts to solve the problem of fragmentation by categorizing uniform size objects together
• The algorithms – Best fit, Worst fit and first fit do not solve the problem of fragmentation, but mitigate it to some extent
Kernel look aside caches (also called slab allocator)
• It’s a technique to get rid of fragmentation in kernel memory.
• It is employed, and beneficial to use, in drivers or kernel programming when objects of the same size need to be created and
destroyed frequently by the kernel module.
• A lookaside cache is a pool of memory objects, all of the same size.
• Data type of look aside caches : kmem_cache_t
• API to create a look aside cache : kmem_cache_create()
• This API creates a new cache object/look aside cache that can host any number of memory areas, all of the same size,
specified by the size argument (a sketch of the call follows the argument list on the next slide)
Kernel look aside caches (also called slab allocator)
Arguments :
name - character name of the lookaside cache
size - size of each object hosted by the cache
offset - most likely 0, it is the offset of the first object in the page
flags - control how allocation should be done
• SLAB_NO_REAP - - Setting this flag protects the cache from being reduced when the system is looking for memory.
• SLAB_HWCACHE_ALIGN - - This flag requires each data object to be aligned to a cache line
• SLAB_CACHE_DMA -- This flag requires each data object to be allocated in the DMA memory zone.
Constructor - - used to initialize the data object in look aside cache
Destructor - - used to de-initialize the data object in look aside cache
You cannot assume that the constructor will be called as an immediate effect of allocating an object. Similarly, destructors can be called at some unknown future time,
not immediately after an object has been freed
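A hedged sketch of creating such a cache with the LDD3-era (2.6) call; struct scull_quantum is hypothetical, and newer kernels use struct kmem_cache *, a different argument list, and no destructor:

#include <linux/slab.h>

/* A hypothetical fixed-size object managed by the cache. */
struct scull_quantum {
        unsigned char data[512];
};

static kmem_cache_t *scull_cache;       /* struct kmem_cache * on newer kernels */

static int scull_cache_init(void)
{
        /* LDD3-era arguments: name, object size, offset, flags,
         * constructor, destructor (the last two may be NULL). */
        scull_cache = kmem_cache_create("scullc", sizeof(struct scull_quantum),
                                        0, SLAB_HWCACHE_ALIGN, NULL, NULL);
        if (!scull_cache)
                return -ENOMEM;
        return 0;
}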
Kernel look aside caches (also called slab allocator)
Constructor and destructor
• When the constructor or destructor is invoked in atomic context, the slab allocator passes the SLAB_CTOR_ATOMIC flag in the 3rd arg to the constructor/destructor.
• For convenience, a programmer can use the same function for both the constructor and destructor; the slab allocator always
passes the SLAB_CTOR_CONSTRUCTOR flag when the callee is a constructor.
API to actually create an object hosted in look aside cache region
• Once the look aside cache has been created, following API is used to create objects of fixed size
void *kmem_cache_alloc(kmem_cache_t *cache, int flags);
Arguments :
• cache – pointer to the slab cache/look aside cache
• flags - same as you would have passed to kmalloc
Kernel look aside caches (also called slab allocator)
API to free an object hosted in look aside cache region
void kmem_cache_free(kmem_cache_t *cache, const void *obj);
Arguments :
• cache – pointer to the slab cache/look aside cache
• obj - pointer to the cache object to be freed
API to free the look aside cache (usually when driver is unloaded)
void kmem_cache_destroy(kmem_cache_t *)
• The destroy operation succeeds only if all objects allocated from the cache have been returned to it.
• A module should check the return status from kmem_cache_destroy; a failure indicates some sort of memory leak within
the module
• kernel maintains statistics on cache usage. These statistics may be obtained from /proc/slabinfo.
• For the internal design of the slab allocator, please read Jeff Bonwick’s paper
mempools
• The slab allocator improves the performance by not repeatedly creating and destroying the objects.
• Improves memory usage efficiency by reducing internal and external fragmentation
• Eradicates most of the need to interact with low level Virtual memory API and memory hardware.
• However, it does not guarantee that a new slab allocation or de-allocation will succeed; that depends on the stress on the
kernel VM.
• There are scenarios where the driver, when requesting memory from the VM, needs assurance that the request MUST succeed,
whatever the case.
• To meet such requirement of assurance, mempools comes into picture.
• A memory pool is really just a form of a lookaside cache that tries to always keep a list of free memory around for use in
emergencies.
• mempools allocate a chunk of memory that sits in a list, idle and unavailable for any real use. Hence, it is easy to consume a
great deal of memory with mempools
• When mempools are used, it is also often desirable to harness the advantage of the slab allocator on top of the memory pool.
• Let us discuss the mempools API provided by linux kernel
Mempools API
• A memory pool has a type of mempool_t (defined in <linux/mempool.h>);
• Creation of new pool
Where alloc_fn and free_fn are user-defined fns used to allocate and free the objects,
min_nr – the minimum no of objects the pool must keep pre-allocated when the pool is created,
pool_data – the pointer to the data to be used by alloc_fn and free_fn for creating and destroying objects.
It is passed as an argument to alloc_fn and free_fn
alloc_fn and free_fn prototype:
Mempools API
• Once the pool is created, objects can be allocated and freed with mempool_alloc() and mempool_free().
As both fns take the mempool as an argument, they call the alloc/free fns registered with the mempool to allocate and
free the objects.
Hence, a mempool is a pool of memory for one specific object type, because these two fns do not permit the
programmer to specify the layout of the object to be created.
mempools are therefore used in conjunction with slab allocators (see the sketch below).
Resizing and destroying the mempools
int mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask);
void mempool_destroy(mempool_t *pool); -- All objects should be freed first
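A minimal sketch, assuming the pool is layered on a previously created slab cache via the standard mempool_alloc_slab/mempool_free_slab helpers; the scull_* names are illustrative:

#include <linux/mempool.h>
#include <linux/slab.h>

static kmem_cache_t *scull_cache;       /* created earlier with kmem_cache_create() */
static mempool_t *scull_pool;

static int scull_pool_init(void)
{
        /* Keep at least 16 objects pre-allocated at all times; the pool
         * draws its objects from the slab cache through the standard
         * mempool_alloc_slab / mempool_free_slab helpers. */
        scull_pool = mempool_create(16, mempool_alloc_slab,
                                    mempool_free_slab, scull_cache);
        if (!scull_pool)
                return -ENOMEM;
        return 0;
}

/* Later: obj = mempool_alloc(scull_pool, GFP_KERNEL);
 *        ... use obj ...
 *        mempool_free(obj, scull_pool);                 */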
Get entire memory pages
• If a module needs to allocate big chunks of memory, it is usually better to use a page oriented technique.
• Requesting whole pages also has other advantages
To allocate pages, the following functions are available: get_zeroed_page(flags), __get_free_page(flags), and __get_free_pages(flags, order).
The flags argument works in the same way as with kmalloc; usually either GFP_KERNEL or GFP_ATOMIC.
order is the base-two logarithm of the number of pages you are requesting or freeing (i.e., log2N). For example, order is 0 if
you want one page and 3 if you request eight pages. If order is too big (no contiguous area of that size is available), the page
allocation fails. The maximum allowed value for order is 10 or 11
If you are curious, /proc/buddyinfo tells you how many blocks of each order are available for each memory zone on the
system
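A small sketch of an order-3 (eight-page) allocation and its matching release; grab_pages/drop_pages are hypothetical helpers:

#include <linux/mm.h>           /* __get_free_pages(), free_pages(), GFP_ flags */

static unsigned long buf;       /* kernel virtual address of the first byte */

static int grab_pages(void)
{
        /* order 3 means 2^3 = 8 physically contiguous pages */
        buf = __get_free_pages(GFP_KERNEL, 3);
        if (!buf)
                return -ENOMEM;
        return 0;
}

static void drop_pages(void)
{
        /* the order passed here must match the order used at allocation time */
        free_pages(buf, 3);
}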
Freeing memory pages
• When a program is done with the pages, it can free them with free_page(addr) or free_pages(addr, order).
• If you try to free a different number of pages from what you allocated, the memory map becomes corrupted, and the system
gets in trouble at a later time
• Working with pages removes the internal fragmentation altogether.
• The main advantage of page-level allocation isn’t actually speed, but rather more efficient memory usage. Allocating by pages
wastes no memory, whereas using kmalloc wastes an unpredictable amount of memory because of allocation granularity.
• But the biggest advantage of the __get_free_page functions is that the pages obtained are completely yours, and you could,
in theory, assemble the pages into a linear area by appropriate tweaking of the page tables. More on this later.
• It’s worth stressing that the memory addresses returned by kmalloc and __get_free_pages are virtual addresses.
Freeing memory pages
Alternate APIs to work with page oriented memory allocations and deallocations
• The core function here is alloc_pages_node(nid, flags, order); alloc_pages(flags, order) and alloc_page(flags) are wrappers over it.
• Flags are the usual GFP_ flags.
• The return value is a pointer (struct page *) to the first page allocated.
• nid – the NUMA node id
To release pages allocated in this manner, you should use one of the following: __free_page(page) or __free_pages(page, order).
If you have specific knowledge of whether a single page’s contents are likely to be resident in the processor cache, you should
communicate that to the kernel with free_hot_page (for cache-resident pages) or free_cold_page. This information helps
the memory allocator optimize its use of memory across the system
Vmalloc and ioremap
Skipping ….
Chapter 10
Interrupt Handling
So how does an I/O device 'interrupt' the CPU?
• I/O device asserts (place the electrical signals on) an interrupt request line
• An interrupt request line consists of special circuitry added to the motherboard of the computer system, that lets an I/O
device send a signal directly to the CPU
• when an interrupt enabled I/O device is ready to receive or transfer data, it asserts an interrupt request line, thus issuing to
the CPU an interrupt request (IRQ)
• When the CPU detects an interrupt request, it temporarily suspends its work to service the request
• Sources :
• http://support.tenasys.com/INtimeHelp_5/ovw_interrupt.html
• http://home.agh.edu.pl/~kozlak/PS2010/interrupts2.html
• http://www.cs.mcgill.ca/~cs573/fall2002/notes/lec273/lecture20/20_2.htm (imp)
• https://www.youtube.com/watch?v=cxkq8jIk7y0
Interrupt Handling
• There are two types of interaction between the CPU and the rest of the computer's hardware :
• When CPU gives orders to hardware
• When hardware asks CPU to respond - these are called interrupts
Hardware devices typically have a very small amount of RAM, and if you don't read their information when it is available, it is
lost. Hence interrupts should be serviced asap.
Under Linux, hardware interrupts are called IRQs (Interrupt Requests)
IRQs are of two types :
• short
• A short IRQ is one which is expected to take a very short period of time of processing, during which the rest of the machine
will be blocked and no other interrupts will be handled
• All interrupts are disabled on a current processor servicing a short interrupt.
• A short interrupt handler MUST not sleep, i.e., the process on whose behalf the interrupt is executing should stay in the
TASK_RUNNING state, or else a system freeze can occur. Therefore, natural context switching is also disabled on that processor.
• Must finish its tasks asap by either finishing it Or deferring it to bottom half routines.
• Therefore, interrupt handlers cannot perform any blocking procedure
• long
• A long IRQ is one which can take longer, and during which other interrupts may occur (but not interrupts from the same
device)
• If at all possible, it's better to declare an interrupt handler to be long.
Service flow of an interrupt
• When the CPU receives an interrupt,
• it stops whatever it's doing (unless it's processing a more important interrupt, in which case it will deal with this one only
when the more important one is done)
• saves certain parameters on the stack and calls the interrupt handler. This means that certain things are not allowed in
the interrupt handler itself, because the system is in an unknown state.
• The solution to this problem is for the interrupt handler to do what needs to be done immediately, usually read
something from the hardware or send something to the hardware, and then schedule the handling of the new
information at a later time (this is called the "bottom half") and return.
• The kernel is then guaranteed to call the bottom half as soon as possible -- and when it does, everything allowed in
kernel modules will be allowed.
Sharing the same interrupt line
• For the same interrupt number, several devices can be registered.
• Meaning same interrupt handler is invoked by multiple devices sharing the same IRQ number.
• When the interrupt handler corresponding to the shared IRQ number is invoked, the kernel runs the device-specific interrupt
service routines registered on that line.
• Thus, when an interrupt handler is invoked on a shared IRQ line, we don’t know up front which device on that IRQ line generated
the interrupt; each handler must check its own device (see the sketch after this slide).
• The kernel must discover which I/O device corresponds to the IRQ number before enabling interrupts.
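A hedged sketch of registering and writing a handler for a shared line; struct my_dev and my_dev_irq_pending() are hypothetical, and the handler prototype shown is the one used on kernels newer than 2.6.19 (older kernels add a struct pt_regs * argument and use SA_SHIRQ instead of IRQF_SHARED):

#include <linux/interrupt.h>

struct my_dev {
        int irq;
        /* registers, buffers, ... */
};

static int my_dev_irq_pending(struct my_dev *dev);     /* hardware specific */

static irqreturn_t my_isr(int irq, void *dev_id)
{
        struct my_dev *dev = dev_id;

        if (!my_dev_irq_pending(dev))
                return IRQ_NONE;        /* the interrupt came from another device */

        /* acknowledge the hardware, do the minimum work, and defer the
         * rest to a bottom half */
        return IRQ_HANDLED;
}

static int my_setup_irq(struct my_dev *dev)
{
        /* IRQF_SHARED allows other devices on the same line; dev is handed
         * back to the handler as dev_id and must be unique per device. */
        return request_irq(dev->irq, my_isr, IRQF_SHARED, "mydev", dev);
}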
Dynamic interrupt lines
• Sometimes a peripheral is not assigned the dedicated IRQ line/no.
• They are dynamically assigned an IRQ line when they need attention or are accessed.
• Such an IRQ line/no is said to be dynamic.
• When device is being serviced on the dynamic IRQ line, IRQ line cannot be used to service other device at the same time
• IRQ line is assigned to service another device when the current device has been serviced and IRQ line is declared as available.
Interrupt handling flow diagram
Más contenido relacionado

La actualidad más candente

U-Boot Porting on New Hardware
U-Boot Porting on New HardwareU-Boot Porting on New Hardware
U-Boot Porting on New HardwareRuggedBoardGroup
 
Linux Initialization Process (2)
Linux Initialization Process (2)Linux Initialization Process (2)
Linux Initialization Process (2)shimosawa
 
Arm device tree and linux device drivers
Arm device tree and linux device driversArm device tree and linux device drivers
Arm device tree and linux device driversHoucheng Lin
 
Introduction Linux Device Drivers
Introduction Linux Device DriversIntroduction Linux Device Drivers
Introduction Linux Device DriversNEEVEE Technologies
 
Linux Serial Driver
Linux Serial DriverLinux Serial Driver
Linux Serial Driver艾鍗科技
 
Part 02 Linux Kernel Module Programming
Part 02 Linux Kernel Module ProgrammingPart 02 Linux Kernel Module Programming
Part 02 Linux Kernel Module ProgrammingTushar B Kute
 
Jagan Teki - U-boot from scratch
Jagan Teki - U-boot from scratchJagan Teki - U-boot from scratch
Jagan Teki - U-boot from scratchlinuxlab_conf
 
U-Boot presentation 2013
U-Boot presentation  2013U-Boot presentation  2013
U-Boot presentation 2013Wave Digitech
 
Kernel Recipes 2017 - An introduction to the Linux DRM subsystem - Maxime Ripard
Kernel Recipes 2017 - An introduction to the Linux DRM subsystem - Maxime RipardKernel Recipes 2017 - An introduction to the Linux DRM subsystem - Maxime Ripard
Kernel Recipes 2017 - An introduction to the Linux DRM subsystem - Maxime RipardAnne Nicolas
 
U boot porting guide for SoC
U boot porting guide for SoCU boot porting guide for SoC
U boot porting guide for SoCMacpaul Lin
 
Linux Porting to a Custom Board
Linux Porting to a Custom BoardLinux Porting to a Custom Board
Linux Porting to a Custom BoardPatrick Bellasi
 

La actualidad más candente (20)

I2C Drivers
I2C DriversI2C Drivers
I2C Drivers
 
U-Boot Porting on New Hardware
U-Boot Porting on New HardwareU-Boot Porting on New Hardware
U-Boot Porting on New Hardware
 
Linux Initialization Process (2)
Linux Initialization Process (2)Linux Initialization Process (2)
Linux Initialization Process (2)
 
Hands-on ethernet driver
Hands-on ethernet driverHands-on ethernet driver
Hands-on ethernet driver
 
Arm device tree and linux device drivers
Arm device tree and linux device driversArm device tree and linux device drivers
Arm device tree and linux device drivers
 
Introduction Linux Device Drivers
Introduction Linux Device DriversIntroduction Linux Device Drivers
Introduction Linux Device Drivers
 
Linux Serial Driver
Linux Serial DriverLinux Serial Driver
Linux Serial Driver
 
Spi drivers
Spi driversSpi drivers
Spi drivers
 
BusyBox for Embedded Linux
BusyBox for Embedded LinuxBusyBox for Embedded Linux
BusyBox for Embedded Linux
 
Part 02 Linux Kernel Module Programming
Part 02 Linux Kernel Module ProgrammingPart 02 Linux Kernel Module Programming
Part 02 Linux Kernel Module Programming
 
Linux Internals - Part II
Linux Internals - Part IILinux Internals - Part II
Linux Internals - Part II
 
Jagan Teki - U-boot from scratch
Jagan Teki - U-boot from scratchJagan Teki - U-boot from scratch
Jagan Teki - U-boot from scratch
 
Character drivers
Character driversCharacter drivers
Character drivers
 
Linux I2C
Linux I2CLinux I2C
Linux I2C
 
U-Boot presentation 2013
U-Boot presentation  2013U-Boot presentation  2013
U-Boot presentation 2013
 
Video Drivers
Video DriversVideo Drivers
Video Drivers
 
Kernel Recipes 2017 - An introduction to the Linux DRM subsystem - Maxime Ripard
Kernel Recipes 2017 - An introduction to the Linux DRM subsystem - Maxime RipardKernel Recipes 2017 - An introduction to the Linux DRM subsystem - Maxime Ripard
Kernel Recipes 2017 - An introduction to the Linux DRM subsystem - Maxime Ripard
 
U boot porting guide for SoC
U boot porting guide for SoCU boot porting guide for SoC
U boot porting guide for SoC
 
Linux Porting to a Custom Board
Linux Porting to a Custom BoardLinux Porting to a Custom Board
Linux Porting to a Custom Board
 
Embedded Android : System Development - Part IV
Embedded Android : System Development - Part IVEmbedded Android : System Development - Part IV
Embedded Android : System Development - Part IV
 

Destacado

Is it important to explain a theorem? A case study in UML and ALCQI
Is it important to explain a theorem? A case study in UML and ALCQIIs it important to explain a theorem? A case study in UML and ALCQI
Is it important to explain a theorem? A case study in UML and ALCQIAlexandre Rademaker
 
Анализа на оддалечена експлоатациjа во Linux кернел
Анализа на оддалечена експлоатациjа во Linux кернелАнализа на оддалечена експлоатациjа во Linux кернел
Анализа на оддалечена експлоатациjа во Linux кернелZero Science Lab
 
Secrets of building a debuggable runtime: Learn how language implementors sol...
Secrets of building a debuggable runtime: Learn how language implementors sol...Secrets of building a debuggable runtime: Learn how language implementors sol...
Secrets of building a debuggable runtime: Learn how language implementors sol...Dev_Events
 
Evaluation of Container Virtualized MEGADOCK System in Distributed Computing ...
Evaluation of Container Virtualized MEGADOCK System in Distributed Computing ...Evaluation of Container Virtualized MEGADOCK System in Distributed Computing ...
Evaluation of Container Virtualized MEGADOCK System in Distributed Computing ...Kento Aoyama
 
Exascale Computing Project - Driving a HUGE Change in a Changing World
Exascale Computing Project - Driving a HUGE Change in a Changing WorldExascale Computing Project - Driving a HUGE Change in a Changing World
Exascale Computing Project - Driving a HUGE Change in a Changing Worldinside-BigData.com
 
Linux Kernel Introduction
Linux Kernel IntroductionLinux Kernel Introduction
Linux Kernel IntroductionSage Sharp
 
TMPA-2017: Dl-Check: Dynamic Potential Deadlock Detection Tool for Java Programs
TMPA-2017: Dl-Check: Dynamic Potential Deadlock Detection Tool for Java ProgramsTMPA-2017: Dl-Check: Dynamic Potential Deadlock Detection Tool for Java Programs
TMPA-2017: Dl-Check: Dynamic Potential Deadlock Detection Tool for Java ProgramsIosif Itkin
 
Linux Device Driver parallelism using SMP and Kernel Pre-emption
Linux Device Driver parallelism using SMP and Kernel Pre-emptionLinux Device Driver parallelism using SMP and Kernel Pre-emption
Linux Device Driver parallelism using SMP and Kernel Pre-emptionHemanth Venkatesh
 
Linux Kernel Tour
Linux Kernel TourLinux Kernel Tour
Linux Kernel Toursamrat das
 
Introduction To Linux Kernel Modules
Introduction To Linux Kernel ModulesIntroduction To Linux Kernel Modules
Introduction To Linux Kernel Modulesdibyajyotig
 
Disaster Recovery and Ceph Block Storage: Introducing Multi-Site Mirroring
Disaster Recovery and Ceph Block Storage: Introducing Multi-Site MirroringDisaster Recovery and Ceph Block Storage: Introducing Multi-Site Mirroring
Disaster Recovery and Ceph Block Storage: Introducing Multi-Site MirroringJason Dillaman
 
1. numPYNQ - Project Presentation
1. numPYNQ - Project Presentation1. numPYNQ - Project Presentation
1. numPYNQ - Project PresentationnumPYNQ
 

Destacado (20)

Report
ReportReport
Report
 
My spy Roger Pérez
My spy Roger PérezMy spy Roger Pérez
My spy Roger Pérez
 
Is it important to explain a theorem? A case study in UML and ALCQI
Is it important to explain a theorem? A case study in UML and ALCQIIs it important to explain a theorem? A case study in UML and ALCQI
Is it important to explain a theorem? A case study in UML and ALCQI
 
Анализа на оддалечена експлоатациjа во Linux кернел
Анализа на оддалечена експлоатациjа во Linux кернелАнализа на оддалечена експлоатациjа во Linux кернел
Анализа на оддалечена експлоатациjа во Linux кернел
 
Secrets of building a debuggable runtime: Learn how language implementors sol...
Secrets of building a debuggable runtime: Learn how language implementors sol...Secrets of building a debuggable runtime: Learn how language implementors sol...
Secrets of building a debuggable runtime: Learn how language implementors sol...
 
Evaluation of Container Virtualized MEGADOCK System in Distributed Computing ...
Evaluation of Container Virtualized MEGADOCK System in Distributed Computing ...Evaluation of Container Virtualized MEGADOCK System in Distributed Computing ...
Evaluation of Container Virtualized MEGADOCK System in Distributed Computing ...
 
Case Study Uml
Case Study UmlCase Study Uml
Case Study Uml
 
RDMA on ARM
RDMA on ARMRDMA on ARM
RDMA on ARM
 
Exascale Computing Project - Driving a HUGE Change in a Changing World
Exascale Computing Project - Driving a HUGE Change in a Changing WorldExascale Computing Project - Driving a HUGE Change in a Changing World
Exascale Computing Project - Driving a HUGE Change in a Changing World
 
Ceph Object Store
Ceph Object StoreCeph Object Store
Ceph Object Store
 
Linux Kernel Introduction
Linux Kernel IntroductionLinux Kernel Introduction
Linux Kernel Introduction
 
TMPA-2017: Dl-Check: Dynamic Potential Deadlock Detection Tool for Java Programs
TMPA-2017: Dl-Check: Dynamic Potential Deadlock Detection Tool for Java ProgramsTMPA-2017: Dl-Check: Dynamic Potential Deadlock Detection Tool for Java Programs
TMPA-2017: Dl-Check: Dynamic Potential Deadlock Detection Tool for Java Programs
 
Linux Device Driver parallelism using SMP and Kernel Pre-emption
Linux Device Driver parallelism using SMP and Kernel Pre-emptionLinux Device Driver parallelism using SMP and Kernel Pre-emption
Linux Device Driver parallelism using SMP and Kernel Pre-emption
 
Linux Kernel Tour
Linux Kernel TourLinux Kernel Tour
Linux Kernel Tour
 
Introduction To Linux Kernel Modules
Introduction To Linux Kernel ModulesIntroduction To Linux Kernel Modules
Introduction To Linux Kernel Modules
 
Component Diagram
Component DiagramComponent Diagram
Component Diagram
 
Disaster Recovery and Ceph Block Storage: Introducing Multi-Site Mirroring
Disaster Recovery and Ceph Block Storage: Introducing Multi-Site MirroringDisaster Recovery and Ceph Block Storage: Introducing Multi-Site Mirroring
Disaster Recovery and Ceph Block Storage: Introducing Multi-Site Mirroring
 
Linux kernel architecture
Linux kernel architectureLinux kernel architecture
Linux kernel architecture
 
1. numPYNQ - Project Presentation
1. numPYNQ - Project Presentation1. numPYNQ - Project Presentation
1. numPYNQ - Project Presentation
 
Case tools
Case toolsCase tools
Case tools
 

Similar a Linux device drivers

Device Drivers and Running Modules
Device Drivers and Running ModulesDevice Drivers and Running Modules
Device Drivers and Running ModulesYourHelper1
 
Course 102: Lecture 25: Devices and Device Drivers
Course 102: Lecture 25: Devices and Device Drivers Course 102: Lecture 25: Devices and Device Drivers
Course 102: Lecture 25: Devices and Device Drivers Ahmed El-Arabawy
 
CNIT 126: 10: Kernel Debugging with WinDbg
CNIT 126: 10: Kernel Debugging with WinDbgCNIT 126: 10: Kernel Debugging with WinDbg
CNIT 126: 10: Kernel Debugging with WinDbgSam Bowne
 
CNIT 126: 10: Kernel Debugging with WinDbg
CNIT 126: 10: Kernel Debugging with WinDbgCNIT 126: 10: Kernel Debugging with WinDbg
CNIT 126: 10: Kernel Debugging with WinDbgSam Bowne
 
Practical Malware Analysis: Ch 10: Kernel Debugging with WinDbg
Practical Malware Analysis: Ch 10: Kernel Debugging with WinDbgPractical Malware Analysis: Ch 10: Kernel Debugging with WinDbg
Practical Malware Analysis: Ch 10: Kernel Debugging with WinDbgSam Bowne
 
Linux Char Device Driver
Linux Char Device DriverLinux Char Device Driver
Linux Char Device DriverGary Yeh
 
Kernel Module Programming
Kernel Module ProgrammingKernel Module Programming
Kernel Module ProgrammingSaurabh Bangad
 
Unit 6 Operating System TEIT Savitribai Phule Pune University by Tushar B Kute
Unit 6 Operating System TEIT Savitribai Phule Pune University by Tushar B KuteUnit 6 Operating System TEIT Savitribai Phule Pune University by Tushar B Kute
Unit 6 Operating System TEIT Savitribai Phule Pune University by Tushar B KuteTushar B Kute
 
Linux kernel modules
Linux kernel modulesLinux kernel modules
Linux kernel modulesEddy Reyes
 
Linux Device Driver,LDD,
Linux Device Driver,LDD,Linux Device Driver,LDD,
Linux Device Driver,LDD,Rahul Batra
 
Writing Character driver (loadable module) in linux
Writing Character driver (loadable module) in linuxWriting Character driver (loadable module) in linux
Writing Character driver (loadable module) in linuxRajKumar Rampelli
 
Char Drivers And Debugging Techniques
Char Drivers And Debugging TechniquesChar Drivers And Debugging Techniques
Char Drivers And Debugging TechniquesYourHelper1
 
Linux Kernel Development
Linux Kernel DevelopmentLinux Kernel Development
Linux Kernel DevelopmentPriyank Kapadia
 
Kernel module programming
Kernel module programmingKernel module programming
Kernel module programmingVandana Salve
 
System Device Tree and Lopper: Concrete Examples - ELC NA 2022
System Device Tree and Lopper: Concrete Examples - ELC NA 2022System Device Tree and Lopper: Concrete Examples - ELC NA 2022
System Device Tree and Lopper: Concrete Examples - ELC NA 2022Stefano Stabellini
 
Linux for embedded_systems
Linux for embedded_systemsLinux for embedded_systems
Linux for embedded_systemsVandana Salve
 

Similar a Linux device drivers (20)

Device Drivers
Device DriversDevice Drivers
Device Drivers
 
Device Drivers and Running Modules
Device Drivers and Running ModulesDevice Drivers and Running Modules
Device Drivers and Running Modules
 
Course 102: Lecture 25: Devices and Device Drivers
Course 102: Lecture 25: Devices and Device Drivers Course 102: Lecture 25: Devices and Device Drivers
Course 102: Lecture 25: Devices and Device Drivers
 
CNIT 126: 10: Kernel Debugging with WinDbg
CNIT 126: 10: Kernel Debugging with WinDbgCNIT 126: 10: Kernel Debugging with WinDbg
CNIT 126: 10: Kernel Debugging with WinDbg
 
CNIT 126: 10: Kernel Debugging with WinDbg
CNIT 126: 10: Kernel Debugging with WinDbgCNIT 126: 10: Kernel Debugging with WinDbg
CNIT 126: 10: Kernel Debugging with WinDbg
 
Practical Malware Analysis: Ch 10: Kernel Debugging with WinDbg
Practical Malware Analysis: Ch 10: Kernel Debugging with WinDbgPractical Malware Analysis: Ch 10: Kernel Debugging with WinDbg
Practical Malware Analysis: Ch 10: Kernel Debugging with WinDbg
 
Linux Char Device Driver
Linux Char Device DriverLinux Char Device Driver
Linux Char Device Driver
 
Kernel Module Programming
Kernel Module ProgrammingKernel Module Programming
Kernel Module Programming
 
Unit 6 Operating System TEIT Savitribai Phule Pune University by Tushar B Kute
Unit 6 Operating System TEIT Savitribai Phule Pune University by Tushar B KuteUnit 6 Operating System TEIT Savitribai Phule Pune University by Tushar B Kute
Unit 6 Operating System TEIT Savitribai Phule Pune University by Tushar B Kute
 
Linux kernel modules
Linux kernel modulesLinux kernel modules
Linux kernel modules
 
Linux Device Driver,LDD,
Linux Device Driver,LDD,Linux Device Driver,LDD,
Linux Device Driver,LDD,
 
Writing Character driver (loadable module) in linux
Writing Character driver (loadable module) in linuxWriting Character driver (loadable module) in linux
Writing Character driver (loadable module) in linux
 
Embedded system - embedded system programming
Embedded system - embedded system programmingEmbedded system - embedded system programming
Embedded system - embedded system programming
 
Char Drivers And Debugging Techniques
Char Drivers And Debugging TechniquesChar Drivers And Debugging Techniques
Char Drivers And Debugging Techniques
 
LINUX Device Drivers
LINUX Device DriversLINUX Device Drivers
LINUX Device Drivers
 
Linux Kernel Development
Linux Kernel DevelopmentLinux Kernel Development
Linux Kernel Development
 
Kernel module programming
Kernel module programmingKernel module programming
Kernel module programming
 
Studienarb linux kernel-dev
Studienarb linux kernel-devStudienarb linux kernel-dev
Studienarb linux kernel-dev
 
System Device Tree and Lopper: Concrete Examples - ELC NA 2022
System Device Tree and Lopper: Concrete Examples - ELC NA 2022System Device Tree and Lopper: Concrete Examples - ELC NA 2022
System Device Tree and Lopper: Concrete Examples - ELC NA 2022
 
Linux for embedded_systems
Linux for embedded_systemsLinux for embedded_systems
Linux for embedded_systems
 

Último

Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxbodapatigopi8531
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Steffen Staab
 
Software Quality Assurance Interview Questions
Software Quality Assurance Interview QuestionsSoftware Quality Assurance Interview Questions
Software Quality Assurance Interview QuestionsArshad QA
 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfkalichargn70th171
 
How To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsHow To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsAndolasoft Inc
 
Optimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTVOptimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTVshikhaohhpro
 
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfThe Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfkalichargn70th171
 
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerThousandEyes
 
Right Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsRight Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsJhone kinadey
 
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceCALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceanilsa9823
 
HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comFatema Valibhai
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providermohitmore19
 
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...ICS
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...harshavardhanraghave
 
Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Modelsaagamshah0812
 
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️anilsa9823
 
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...OnePlan Solutions
 
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...MyIntelliSource, Inc.
 

Último (20)

Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptx
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
 
Software Quality Assurance Interview Questions
Software Quality Assurance Interview QuestionsSoftware Quality Assurance Interview Questions
Software Quality Assurance Interview Questions
 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
 
How To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsHow To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.js
 
Optimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTVOptimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTV
 
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfThe Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
 
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS LiveVip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
 
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
 
Right Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsRight Money Management App For Your Financial Goals
Right Money Management App For Your Financial Goals
 
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceCALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
 
HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.com
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service provider
 
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
 
Microsoft AI Transformation Partner Playbook.pdf
Microsoft AI Transformation Partner Playbook.pdfMicrosoft AI Transformation Partner Playbook.pdf
Microsoft AI Transformation Partner Playbook.pdf
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
 
Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Models
 
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
 
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
 
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
 

Linux device drivers

  • 2. Introduction • Each module is made up of object code (not linked into a complete executable) that can be dynamically linked to the running kernel by the insmod program and can be unlinked by the rmmod program. • Device Drivers are of three types: • Char device driver • Block device driver • Network device driver • Programmer can implement a driver which can be hybrid of above three. • Types od devices : • Character devices • A character (char) device is one that can be accessed as a stream of bytes (like a file) • a char driver is in charge of implementing this behavior. • Such a driver usually implements at least the open, close, read, and write system calls. • Example : /dev/consoles and /dev/ttyS0 • Block devices • A block device is a device (e.g., a disk) that can host a filesystem • a block device can only handle I/O operations that transfer one or more whole blocks, which are usually 512 bytes (or a larger power of two) bytes in length • Eg : Memory card
  • 3. Introduction • Network devices • Network interface cards • ASICs • Network processors • USB devices • Every USB device is driven by a USB module that works with the USB subsystem, but the device itself shows up in the system as a char device (a USB serial port, say), a block device (a USB memory card reader), or a network device (a USB Ethernet interface).
  • 4. Kernel programming basics • The exit fn of modules should release all resources else the resources would uselessly stay reserved in kernel until system reboot. • The only functions KM can call are the ones exported by the kernel; there are no libraries to link to • Only functions that are actually part of the kernel itself may be used in kernel modules. • Unix transfers execution from user space to kernel space whenever an application issues a system call or is suspended by a hardware interrupt • Linux systems run multiple processes, more than one of which can be trying to use your driver at the same time • Linux kernel code, including driver code, must be reentrant—it must be capable of running in more than one context at the same time. • If you do not write your driver code with concurrency in mind, it will be subject to catastrophic failures that can be exceedingly difficult to debug. • Kernel modules don’t execute sequentially, they are event driver • Kernel code (that is being executed in the context of the process) can refer to the current process by accessing the global item current, defined in <asm/current.h>, which yields a pointer to struct task_struct, defined by <linux/sched.h>. The current pointer refers to the process that is currently executing [21, LDD3] • A device driver can just include <linux/sched.h> and refer to the current process. Example below prints the process name and pid in kernel world. • printk doesn’t flush until a trailing newline is provided • Q : Is current a global variable ? printk(KERN_INFO "The process is "%s" (pid %i)n", current->comm, current->pid);
  • 5. Kernel programming basics • User space Applications are laid out in virtual memory with a very large stack area. • The kernel, instead, has a very small stack; it can be as small as a single, 4096-byte page. So, be careful to not write recursive calls in driver code Or refrain from creating auto large stack variables. • Kernel code cannot do floating point arithmetic. (Why ?) Loading and unloading modules • Insmod – load the KM into kernel. How it works in detail ? [25 LLD3] sys_init_module() - system call, allocates kernel memory to hold the module using vmalloc | copies module text into memory region | resolves kernel references in the module via the kernel symbol table (linking) | calls the module’s initialization function • modprobe – loads the kernel modules + all other kernel modules whose fns are referenced [25 LLD3] • modprobe looks only in the standard installed module directories • rmmod - remove KM from the kernel • module removal fails if the kernel believes that the module is still in use. For eg, application may still using KM.
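To make the load/unload sequence concrete, here is a minimal, hedged sketch of a loadable module (function and message names are illustrative, not from the slides): insmod hands it to sys_init_module(), which ends by calling the init function, and rmmod triggers the exit function.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

MODULE_LICENSE("GPL");

/* Called at the end of sys_init_module(), after the module text has been
 * copied into kernel memory and its references have been resolved. */
static int __init hello_init(void)
{
        printk(KERN_INFO "hello: module loaded\n");
        return 0;               /* a negative return makes insmod fail */
}

/* Called by rmmod; must release every resource the module acquired. */
static void __exit hello_exit(void)
{
        printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);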
  • 6. Kernel programming basics • We can configure the kernel to allow “forced” removal of a KM; things may go wrong if we do, however. • The modprobe command can sometimes replace several invocations of insmod • lsmod - produces a list of the modules currently loaded in the kernel • lsmod works by reading the /proc/modules virtual file. • Kernel Symbol table • The table contains the addresses of global kernel items—functions and variables • When a module is loaded, any symbol exported by the module becomes part of the kernel symbol table • You need to export symbols whenever other modules may benefit from using them. • If your module needs to export symbols for other modules to use, the macros EXPORT_SYMBOL(name) and EXPORT_SYMBOL_GPL(name) should be used • Either of these macros makes the given symbol available outside the module; the _GPL variant makes it visible only to GPL-licensed modules. Vermagic.o - an object file from the kernel source directory that describes the environment a module was built for.
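As a hedged illustration of the export macros just mentioned (the symbol names below are hypothetical), an exporting module might contain:

#include <linux/module.h>

int shared_counter;                        /* hypothetical exported variable */
EXPORT_SYMBOL(shared_counter);

int do_shared_work(int arg)                /* hypothetical exported function */
{
        return arg + shared_counter;
}
EXPORT_SYMBOL_GPL(do_shared_work);         /* visible only to GPL-compatible modules */

/* A second module can then declare the symbol and call it; modprobe loads
 * the exporting module first so the reference can be resolved:
 *     extern int do_shared_work(int arg);
 */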
  • 8. Char device driver • Char devices are accessed through names in the filesystem in the /dev dir; their ls -l listing starts with the letter “c” • Block devices, on the other hand, are represented by files whose listing starts with the letter “b” Major number and Minor Number • ls -l in the /dev dir shows two numbers separated by a comma. These are the major and minor no respectively. • Major number identifies the driver associated with the device. • Modern Linux kernels allow multiple drivers to share major numbers • The minor number is used by the kernel to determine exactly which device is being referred to • Minor no identifies the device being operated on by the driver. Major number and Minor Number Macros • dev_t (defined in <linux/types.h>) is a 32-bit quantity with 12 bits set aside for the major number and 20 for the minor number. • MAJOR(dev_t dev) – retrieve the major no • MINOR(dev_t dev) – retrieve the minor no. • dev_t dev = MKDEV(int major, int minor) – create a dev_t object out of major and minor nos. Allocating and freeing device numbers • Declared in <linux/fs.h> • Here, first is the beginning device number of the range you would like to allocate (a dev_t holding the major number and the first minor number). int register_chrdev_region(dev_t first, unsigned int count, char *name);
  • 9. • The minor number portion of first is often 0, but there is no requirement to that effect. • count is the total number of contiguous device numbers you are requesting • Finally, name is the name of the device that should be associated with this number range • it will appear in /proc/devices and sysfs. • Return value is 0 if success, else negative error code. • register_chrdev_region works well if you know ahead of time exactly which device numbers you want. • however, you will not know which major numbers your device will use, hence there is a API to dynamically allocate a major and minor no to your driver and devices. The API is : • dev is an output-only parameter that will, on successful completion, hold the first number in your allocated range. • firstminor should be the requested first minor number to use; it is usually 0. • The count and name parameters work like those given to request_chrdev_region. Freeing the Major number and Minor Number • Regardless of how you allocate your device numbers, you should free them when they are no longer in use • Device numbers are freed with : • The usual place to call unregister_chrdev_region would be in your module’s cleanup function int alloc_chrdev_region(dev_t *dev, unsigned int firstminor, unsigned int count, char *name); void unregister_chrdev_region(dev_t first, unsigned int count)
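A minimal, hedged sketch of reserving and releasing a device-number range (the driver name and count are assumptions) could look like this:

#include <linux/fs.h>
#include <linux/module.h>
#include <linux/printk.h>

static dev_t scull_devno;                       /* holds MAJOR and first MINOR */
static const unsigned int scull_count = 4;      /* four contiguous device numbers */

static int __init scull_init(void)
{
        /* Ask for a dynamically chosen major number, minors 0..3,
         * listed as "scull" in /proc/devices. */
        int ret = alloc_chrdev_region(&scull_devno, 0, scull_count, "scull");

        if (ret < 0)
                return ret;
        pr_info("scull: major=%d first minor=%d\n",
                MAJOR(scull_devno), MINOR(scull_devno));
        return 0;
}

static void __exit scull_exit(void)
{
        /* Free the numbers when they are no longer in use. */
        unregister_chrdev_region(scull_devno, scull_count);
}

module_init(scull_init);
module_exit(scull_exit);
MODULE_LICENSE("GPL");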
  • 10. Dynamic allocation of Major and Minor numbers • Some major device numbers are statically assigned to the most common devices • As a driver writer, you have a choice: • you can simply pick a number that appears to be unused, or • you can allocate major numbers in a dynamic manner • Picking a number may work as long as the only user of your driver is you; • once your driver is more widely deployed, a randomly picked major number will lead to conflicts and trouble • It is recommended to use dynamic allocation to obtain your major device number • To statically create a device in the /dev dir for a device : mknod /dev/<device_name> c <major no> <minor no> Eg : mknod /dev/scull0 c 3 4 Below is a sketch showing how the user can choose to assign major and minor numbers to the driver by specifying a command-line argument with insmod (scull_major)
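The original snippet is not readable on the slide; the following is a hedged reconstruction of the usual scull-style logic (parameter and function names are assumptions): a scull_major module parameter selects static registration, otherwise the major number is allocated dynamically.

#include <linux/fs.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static int scull_major;            /* 0 (default) => dynamic allocation */
static int scull_minor;
module_param(scull_major, int, 0444);   /* e.g. insmod scull.ko scull_major=250 */
module_param(scull_minor, int, 0444);

static int scull_setup_dev_numbers(void)
{
        dev_t dev;
        int ret;

        if (scull_major) {
                /* The user supplied a major number on the insmod command line. */
                dev = MKDEV(scull_major, scull_minor);
                ret = register_chrdev_region(dev, 1, "scull");
        } else {
                /* Otherwise let the kernel pick one. */
                ret = alloc_chrdev_region(&dev, scull_minor, 1, "scull");
                scull_major = MAJOR(dev);
        }
        return ret;
}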
  • 11. Device file • register_chrdev_region/alloc_chrdev_region() create a entry in /proc/devices file, It just tells that there exists a driver in kernel which is registered to perform I/O on a device file with major number. • For your driver to be fully operative, device file representing the hw should be created in /dev dir. The above two fn calls alone are not sufficient to create this entry in /dev dir • On unix, each piece of hardware is represented by a file located in /dev named a device file which provides the means to communicate with the hardware. • The device driver provides the communication on behalf of a user program • Manual Method of creating a device file in /dev • Once the driver is insmod-ed in kernel, you can manually create device file in /dev using mknod command • Use simple rm to remove the device file. • Eg : mknod /dev/myscull c 12 2, where 12 is major number, 2 is minor number • rm /dev/myscull • A device node created by mknod is just a file that contains a device major and minor number. When you access that file the first time, Linux looks for a driver that advertises that major/minor and loads it. Your driver then handles all I/O with that file.
  • 12. Device file [root@localhost dev]# cat /proc/devices Character devices: 1 mem 4 tty 4 ttyS 5 /dev/tty 5 /dev/console 5 /dev/ptmx 10 misc 21 sg 128 ptm 136 pts 250 myscull << tells that major number 250 has been reserved to be acted upon by the driver name myscull 251 uio 252 bsg 253 ptp 254 pps
  • 14. Dynamic allocation of Major and Minor numbers • So far, we have reserved some device numbers for our use – a major no for the driver, and minor nos for the devices • We need to connect our driver’s operations to these numbers • We achieve this using the file_operations structure defined in <linux/fs.h> • This structure is a collection of function pointers • We register our device-specific fns to these function pointers to perform device-specific operations. • An open device file is represented by a struct file in the kernel • The listing of the structure on the next slide is taken from kernel 4.1.x
  • 15. Struct file_operations struct file_operations { struct module *owner; >> The first file_operations field is not an operation at all; it is a pointer to the module that “owns” the structure. This field is used to prevent the module from being unloaded while its operations are in use. Almost all the time, it is simply initialized to THIS_MODULE, a macro defined in <linux/module.h>. loff_t (*llseek) (struct file *, loff_t, int); >>The llseek method is used to change the current read/write position in a file, and the new position is returned as a (positive) return value. The loff_t parameter is a “long offset” and is at least 64 bits wide even on 32-bit platforms. Errors are signaled by a negative return value. If this function pointer is NULL, seek calls will modify the position counter in the file structure (described in the section “The file Structure”) in potentially unpredictable ways. ssize_t (*read) (struct file *, char __user *, size_t, loff_t *); >>Used to retrieve data from the device. A null pointer in this position causes the read system call to fail with -EINVAL (“Invalid argument”). A nonnegative return value represents the number of bytes successfully read (the return value is a “signed size” type, usually the native integer type for the target platform). ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *); >>Sends data to the device. If NULL, -EINVAL is returned to the program calling the write system call. The return value, if nonnegative, represents the number of bytes successfully written. ssize_t (*read_iter) (struct kiocb *, struct iov_iter *); >> probably similar to fread() page 66 LDD3 ssize_t (*write_iter) (struct kiocb *, struct iov_iter *); >> probably similar to vector version of read/write, page 69 int (*iterate) (struct file *, struct dir_context *); unsigned int (*poll) (struct file *, struct poll_table_struct *); >> The poll method is the back end of three system calls: poll, epoll, and select, all of which are used to query whether a read or write to one or more file descriptors would block. The poll method should return a bit mask indicating whether nonblocking reads or writes are possible, and, possibly, provide the kernel with information that can be used to put the calling process to sleep until I/O becomes possible. If a driver leaves its poll method NULL, the device is assumed to be both readable and writable without blocking. long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long); long (*compat_ioctl) (struct file *, unsigned int, unsigned long); >>The ioctl system call offers a way to issue device-specific commands (such as formatting a track of a floppy disk, which is neither reading nor writing). Additionally, a few ioctl commands are recognized by the kernel without referring to the fops table. If the device doesn’t provide an ioctl method, the system call returns an error for any request that isn’t predefined (-ENOTTY, “No such ioctl for device”). int (*mmap) (struct file *, struct vm_area_struct *); >> mmap is used to request a mapping of device memory to a process’s address space. If this method is NULL, the mmap system call returns -ENODEV.
  • 16. Struct file_operations int (*mremap)(struct file *, struct vm_area_struct *); int (*open) (struct inode *, struct file *); >> Though this is always the first operation performed on the device file, the driver is not required to declare a corresponding method. If this entry is NULL, opening the device always succeeds, but your driver isn’t notified. int (*flush) (struct file *, fl_owner_t id); >> The flush operation is invoked when a process closes its copy of a file descriptor for a device; it should execute (and wait for) any outstanding operations on the device. This must not be confused with the fsync operation requested by user programs. If flush is NULL, the kernel simply ignores the user application request. int (*release) (struct inode *, struct file *); >> This operation is invoked when the file structure is being released. Like open, release can be NULL.* int (*fsync) (struct file *, loff_t, loff_t, int datasync); >> This method is the back end of the fsync system call, which a user calls to flush any pending data. If this pointer is NULL, the system call returns –EINVAL. How it Is different from fflush() ? int (*aio_fsync) (struct kiocb *, int datasync); >> This is the asynchronous version of the fsync method. int (*fasync) (int, struct file *, int); >> This operation is used to notify the device of a change in its FASYNC flag. Asynchronous notification is an advanced topic and is described in Chapter 6 [LDD3]. The field can be NULL if the driver doesn’t support asynchronous notification. int (*lock) (struct file *, int, struct file_lock *); ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int); >> sendpage is the other half of sendfile; it is called by the kernel to send data, one page at a time, to the corresponding file. Device drivers do not usually implement sendpage. unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long); >> The purpose of this method is to find a suitable location in the process’s address space to map in a memory segment on the underlying device. This task is normally performed by the memory management code; this method exists to allow drivers to enforce any alignment requirements a particular device may have. Most drivers can leave this method NULL.
  • 17. Struct file_operations int (*check_flags)(int); int (*flock) (struct file *, int, struct file_lock *); ssize_t (*splice_write)(struct pipe_inode_info *, struct file *, loff_t *, size_t, unsigned int); ssize_t (*splice_read)(struct file *, loff_t *, struct pipe_inode_info *, size_t, unsigned int); int (*setlease)(struct file *, long, struct file_lock **, void **); long (*fallocate)(struct file *file, int mode, loff_t offset, loff_t len); void (*show_fdinfo)(struct seq_file *m, struct file *f); #ifndef CONFIG_MMU unsigned (*mmap_capabilities)(struct file *); #endif };
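To connect this structure to a driver, a typical initialization uses C99 tagged initializers; this is a hedged sketch in which the method names are assumptions, and any pointer left unset stays NULL so the kernel applies its default behaviour for that operation.

#include <linux/fs.h>
#include <linux/module.h>

/* Forward declarations of the device-specific methods (assumed names). */
static int     scull_open(struct inode *inode, struct file *filp);
static int     scull_release(struct inode *inode, struct file *filp);
static ssize_t scull_read(struct file *filp, char __user *buf,
                          size_t count, loff_t *f_pos);
static ssize_t scull_write(struct file *filp, const char __user *buf,
                           size_t count, loff_t *f_pos);

static const struct file_operations scull_fops = {
        .owner   = THIS_MODULE,   /* prevents unload while operations are in use */
        .open    = scull_open,
        .release = scull_release,
        .read    = scull_read,
        .write   = scull_write,
};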
  • 19. Struct file • struct file, defined in <linux/fs.h>, is the second most important data structure used in device drivers. • The file structure represents an open file. Only the first open() system call creates this structure in kernel. • Fork()/dup() do not create this structure in kernel, only open() does that. They just increment the counter in the existing structure. If n process calls open, in kernel n struct *file will be created, i-node stays unique and one. • It is not specific to device drivers; every open file in the system has an associated struct file in kernel space. • It is created by the kernel on open and is passed to any function that operates on the file, until the last close. • After all instances of the file are closed, the kernel releases the data structure • Details on fields of structure is underneath. • drivers never create file structures; they only access structures created elsewhere. • file structure represents an open file descriptor, and not a device (file) struct file { struct list_head f_list; struct dentry *f_dentry; >> The directory entry (dentry) structure associated with the file. Device driver writers normally need not concern themselves with dentry structures, other than to access the inode structure as filp->f_dentry->d_inode. struct vfsmount *f_vfsmnt; struct file_operations *f_op; >> The operations associated with the file. The value in filp->f_op is never saved by the kernel for later reference; this means that you can change the file operations associated with your file, and the new methods will be effective after you return to the caller. The kernel assigns the pointer as part of its implementation of open. atomic_t f_count; unsigned int f_flags; >> These are the file flags, such as O_RDONLY, O_NONBLOCK, and O_SYNC. A driver should check the O_NONBLOCK flag to see if nonblocking operation has been requested , the other flags are seldom used. In particular, read/write permission should be checked using f_mode rather than f_flags. All the flags are defined in the header <linux/fcntl.h>
  • 20. Struct file mode_t f_mode; >> The file mode identifies the file as either readable or writable (or both), by means of the bits FMODE_READ and FMODE_WRITE. You might want to check this field for read/write permission in your open or ioctl function, but you don’t need to check permissions for read and write, because the kernel checks before invoking your method. An attempt to read or write when the file has not been opened for that type of access is rejected without the driver even knowing about it. Int f_error; loff_t f_pos; >> The current reading or writing position. loff_t is a 64-bit value on all platforms (long long in gcc terminology). The driver can read this value if it needs to know the current position in the file but should not normally change it; read and write should update a position using the pointer they receive as the last argument instead of acting on filp->f_pos directly. The one exception to this rule is in the llseek method, the purpose of which is to change the file position. struct fown_struct f_owner; unsigned int f_uid, f_gid; struct file_ra_state f_ra; unsigned long f_version; void *f_security; /* needed for tty driver, and maybe others */ void *private_data; >> The open system call sets this pointer to NULL before calling the open method for the driver. You are free to make its own use of the field or to ignore it; you can use the field to point to allocated data, but then you must remember to free that memory in the release method before the file structure is destroyed by the kernel. private_data is a useful resource for preserving state information across system calls and is used by most of our sample modules. #ifdef CONFIG_EPOLL /* Used by fs/eventpoll.c to link all the hooks to this file */ struct list_head f_ep_links; spinlock_t f_ep_lock; #endif /* #ifdef CONFIG_EPOLL */ struct address_space *f_mapping; };
  • 22. struct inode • Inode basics in general • An inode is a data structure that stores the following information about a file • Size of file • Device ID • User ID of the file • Group ID of the file • The file mode information and access privileges for owner, group and others • File protection flags • The timestamps for file creation, modification etc. • Link counter to determine the number of hard links • Pointers to the blocks storing the file’s contents • Note that the name of the file is not stored in the inode; directory entries map file names to their inode numbers. • The reason for separating the file name from the other information about the same file is to support hard links: several file names can point to the same inode. http://www.thegeekstuff.com/2012/01/linux-inodes/
  • 23. struct inode • The inode structure is used by the kernel internally to represent files. • It is different from the file structure that represents an open file descriptor • There can be numerous file structures representing multiple open descriptors on a single file (device), but they all point to a single inode structure. • space for Inodes is allocated when the operating system or a new file system is installed and when it does its initial structuring – register_chrdev_region()/alloc_chrdev_region() • As a general rule, only two fields of this structure are of interest for writing driver code • dev_t i_rdev - For inodes that represent device files, this field contains the actual device number. • struct cdev *i_cdev - struct cdev is the kernel’s internal structure that represents char devices; this field contains a pointer to that structure when the inode refers to a char device file • Two macros that can be used to obtain the major and minor number from an inode. These macros should be used instead of manipulating i_rdev directly. • unsigned int iminor(struct inode *inode); • unsigned int imajor(struct inode *inode); • Display inode number : • ls –i • Df –i • Stat <file name>
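A small, hedged fragment showing iminor()/imajor() inside an open method (purely illustrative):

#include <linux/fs.h>
#include <linux/printk.h>

static int scull_open(struct inode *inode, struct file *filp)
{
        /* The inode of the device file carries the device number. */
        pr_info("scull: open on device %u:%u\n", imajor(inode), iminor(inode));
        return 0;
}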
  • 25. struct cdev and Character device registration • The kernel uses structures of type struct cdev to represent char devices internally. • Before the kernel invokes your device’s operations, you must allocate and register one or more of these structures • your code should include <linux/cdev.h>, where the structure and its associated helper functions are defined struct cdev { struct kobject kobj; struct module *owner; // set to THIS_MODULE struct file_operations *ops; struct list_head list; dev_t dev; unsigned int count; }; • Two way of allocating and initializing struct cdev structures: • struct cdev *my_cdev = cdev_alloc( ); my_cdev->ops = &my_fops; • void cdev_init(struct cdev *cdev, struct file_operations *fops); • Once the cdev structure is set up, the final step is to tell the kernel about it with a call to: • int cdev_add(struct cdev *dev, dev_t num, unsigned int count); Here, dev is the cdev structure, num is the first device number to which this device responds, and count is the number of device numbers that should be associated with the device.
  • 26. Struct cdev and Character device registration • There are a couple of important things to keep in mind when using cdev_add. • The first is that this call can fail. If it returns a negative error code, your device has not been added to the system. It almost always succeeds • however, and that brings up the other point: as soon as cdev_add returns, your device is “live” and its operations can be called by the kernel. • You should not call cdev_add until your driver is completely ready to handle operations on the device. • cdev_add() is used to make an association of the ‘struct file_operations *’ and the major, minor number region De-Registration • To remove a char device from the system, call: • void cdev_del(struct cdev *dev);
  • 27. Steps to register a character device driver with kernel 1. register_chrdev_region/alloc_chrdev_region • tell the kernel to reserve the major and minor number range and driver name to act on device possessing these major/minor numbers • This fn creates entry in of the driver in /proc/devices file, but not in /dev dir. • This fn is just for the purpose of reserving major/minor numbers in kernel. 2. create a struct file_operations structure and register device specific callbacks 3. cdev_init(struct cdev *, struct file_operations *) • Initialize character device driver structure with its operations 4. Bind the file operations with the char device • cdev.ops = &fops; 5. Add the character device file to the kernel along with major and minor number. This will create the struct inode and struct file objects for the device in the kernel. However, you device would still not be created in /dev dir. cdev_add (struct cdev *, dev_t , 1); 6. use mknod command to manually create the device file in /dev dir. 7. For dynamic creation of device file in /dev dir … ??
  • 28. Steps to de-register a character device driver with kernel 1. cdev_del(struct cdev *) – remove the char device from the system so no new operations can be invoked on it 2. unregister_chrdev_region(dev_t, count) – give back the reserved major/minor number range 3. Remove the device file from the /dev dir (rm, or automatically if it was created dynamically). A combined sketch of registration and de-registration follows below.
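The following is a hedged sketch of the whole registration/de-registration sequence (names are assumptions; the file_operations would be filled in as in the earlier sketch). Dynamic creation of the /dev node, the question left open in step 7, is usually handled with class_create()/device_create() so that udev creates the node for you.

#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/module.h>

static dev_t scull_devno;
static struct cdev scull_cdev;
static const struct file_operations scull_fops = {
        .owner = THIS_MODULE,
        /* .open, .read, .write, .release ... as in the earlier sketch */
};

static int __init scull_init(void)
{
        int ret;

        /* Step 1: reserve a major/minor range. */
        ret = alloc_chrdev_region(&scull_devno, 0, 1, "scull");
        if (ret < 0)
                return ret;

        /* Steps 2-4: bind the file operations to the cdev. */
        cdev_init(&scull_cdev, &scull_fops);
        scull_cdev.owner = THIS_MODULE;

        /* Step 5: the device goes "live" here - only call this once the
         * driver is completely ready to handle operations. */
        ret = cdev_add(&scull_cdev, scull_devno, 1);
        if (ret < 0) {
                unregister_chrdev_region(scull_devno, 1);
                return ret;
        }
        return 0;
}

static void __exit scull_exit(void)
{
        /* De-registration mirrors registration in reverse order. */
        cdev_del(&scull_cdev);
        unregister_chrdev_region(scull_devno, 1);
}

module_init(scull_init);
module_exit(scull_exit);
MODULE_LICENSE("GPL");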
  • 29. Association Diagram among kernel structures • struct inode – holds the info about the file; inode->i_cdev points to the cdev when the inode refers to a char device • struct cdev – represents the char device internally in Linux; cdev->ops points to the file_operations and cdev->dev holds the dev_t device number • struct file – represents an open file descriptor in kernel space; filp->f_op points to the file_operations, and filp->private_data is typically set to the driver’s device structure • struct file_operations – represents the set of operations to be performed on a file
  • 32. Open Vs Close System calls on a kernel object 1. When an open() system call is issued on a kernel object (device file), the kernel creates an open file descriptor structure: struct file. 2. A new struct file is allocated in the kernel every time an open() sys call is issued on a device file. 3. The objective is that for each open(), there should be one and only one release method invocation in the driver. 4. If a process issues n open() sys calls on a device file/kernel object, n struct file structures are created in the kernel. 5. If a process then calls fork()/dup(), no new struct file is created; only the reference count of the existing struct file is incremented. This reference count denotes the number of file descriptors sharing the structure. 6. When close() is issued, the reference count of the struct file is decremented; when it reaches 0, the release method is invoked in the driver. 7. This is how it is ensured that for each open() there is one and only one release() invocation.
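A hedged open/release sketch illustrating points 1-7 (the per-device structure and names are assumptions): container_of() recovers the device from inode->i_cdev, and private_data carries it to the other methods.

#include <linux/cdev.h>
#include <linux/fs.h>

/* Hypothetical per-device structure embedding the cdev. */
struct scull_dev {
        struct cdev cdev;
        /* ... device data ... */
};

static int scull_open(struct inode *inode, struct file *filp)
{
        struct scull_dev *dev;

        /* inode->i_cdev points at the cdev passed to cdev_add();
         * container_of() walks back to the enclosing scull_dev. */
        dev = container_of(inode->i_cdev, struct scull_dev, cdev);
        filp->private_data = dev;       /* for read/write/ioctl later */
        return 0;
}

/* Invoked only when the last reference to this struct file goes away:
 * once per successful open(), no matter how many fork()/dup() copies exist. */
static int scull_release(struct inode *inode, struct file *filp)
{
        return 0;
}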
  • 33. Open Vs Close System calls on a kernel object
  • 34. char __user *buff 1. In kernel, char __ user * is a pointer to the virtual address of the process. It means, we can access the process virtual address directly in kernel space. 2. We should not directly dereference the user space address in kernel code for the following reasons: 1. The current page of process may not be the resident in main memory when kernel code attempts to access that user space address, leading to page fault. Kernel code should not generate page faults, if it does, it results in process termination in the context of which the kernel system call was made. 2. The pointer in question has been supplied by a user program, which could be buggy or malicious. If your driver ever blindly dereferences a user-supplied pointer, it provides an open doorway allowing a user-space program to access or overwrite memory anywhere in the system. If you do not wish to be responsible for compromising the security of your users’ systems, you cannot ever dereference a user-space pointer directly. 3. Obviously, your driver must be able to access the user-space buffer in order to get its job done. This access must always be performed by special, kernel-supplied below functions, however, in order to be safe. These fns are defined in <linux/uaccess.h> Note : copy_from_user/copy_to_user are a preemptive call. unsigned long copy_to_user(void __user *to, const void *from, unsigned long count); unsigned long copy_from_user(void *to, const void __user *from, unsigned long count);
  • 35. char __user *buff 4. Although these functions behave like normal memcpy functions, a little extra care must be used when accessing user space from kernel code. • The user pages being addressed might not be currently present in memory, and the virtual memory subsystem can put the process to sleep while the page is being transferred into place. • This happens, for example, when the page must be retrieved from swap space. It means, driver is trying to access the virtual address of the process which may be in sleep state (due to page fault). This in turn put driver to momentarily sleep. • The net result for the driver writer is that any function that accesses user space must be reentrant, must be able to execute concurrently with other driver functions, and, in particular, must be in a position where it can legally sleep. • The role of the two functions is not limited to copying data to and from user-space • they also check whether the user space pointer is valid. If the pointer is invalid, no copy is performed; • if an invalid address is encountered during the copy, on the other hand, only part of the data is copied. • In both cases, the return value is the amount of memory still to be copied. • Be careful, if you do not check a user-space pointer that you pass to these functions, then you can create kernel crashes and/or security holes. • More detail on this in chapter 5 and 6 of LDD3 book.
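A hedged read-method sketch using copy_to_user() on a hypothetical in-kernel buffer (scull_buf and scull_len are assumptions); it never dereferences the user pointer directly.

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

#define SCULL_BUFSZ 4096                 /* hypothetical device buffer size */
static char scull_buf[SCULL_BUFSZ];
static size_t scull_len;                 /* bytes currently stored */

static ssize_t scull_read(struct file *filp, char __user *buf,
                          size_t count, loff_t *f_pos)
{
        if (*f_pos >= scull_len)
                return 0;                        /* end of file */
        if (count > scull_len - *f_pos)
                count = scull_len - *f_pos;

        /* copy_to_user() validates the user pointer and may sleep
         * while the user pages are faulted in. */
        if (copy_to_user(buf, scull_buf + *f_pos, count))
                return -EFAULT;                  /* some bytes were not copied */

        *f_pos += count;
        return count;                            /* bytes transferred */
}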
  • 36. Operations on user space address (char __user *buff) in kernel land access_ok() • Page 142 , LDD3 … put_user(datum, ptr) get_user(local, ptr) • page 143, LDD3
  • 37. read System Call • The return value for read is interpreted by the calling application program: • If the value equals the count argument passed to the read system call, the requested number of bytes has been transferred. This is the optimal case. • If the value is positive, but smaller than count, only part of the data has been transferred. This may happen for a number of reasons, depending on the device. Most often, the application program retries the read. • if you read using the fread function, the library function reissues the system call until completion of the requested data transfer. In kernel, the counter part of fread is read_iter() • If the value is 0, end-of-file was reached (and no data was read). • A negative value means there was an error. The value specifies what the error was, according to <linux/errno.h>. • What is missing from the preceding list is the case of “there is no data, but it may arrive later.” In this case, the read system call should block. We’ll deal with blocking input in Chapter 6.
  • 38. write System Call • write, like read, can transfer less data than was requested, according to the following rules for the return value: • If the value equals count, the requested number of bytes has been transferred. • If the value is positive, but smaller than count, only part of the data has been transferred. The program will most likely retry writing the rest of the data. • If the value is 0, nothing was written. This result is not an error, and there is no reason to return an error code. Once again, the standard library retries the call to write. • We’ll examine the exact meaning of this case in Chapter 6, where blocking write is introduced. • A negative value means an error occurred; as for read, valid error values are those defined in <linux/errno.h>. • some programmers are accustomed to seeing write calls that either fail or succeed completely, i.e. partial write is not supported.
  • 39. errno • Both the read and write methods (Or any other method) return a negative value if an error occurs. • A return value greater than or equal to 0, instead, tells the calling user space program how many bytes have been successfully transferred. • If some data is transferred correctly and then an error happens, the return value must be the count of bytes successfully transferred • kernel read and write implementations return a negative number to signal an error, and the value of the number indicates the kind of error that occurred • However, programs that run in user space always see –1 as the error return value • They need to access the errno variable to find out what happened.
  • 40. Concurrency problem • There could be multiple processes operating on the same device, issuing the arbitrary system calls to perform operation on a device. • Lets say, process A is reading a device memory using read(), at offset 50, and reading 10 bytes. • Meanwhile, process B opens the device using open() system call. Lets open() trim/truncate the device, updating the current position pointer in device memory to zero. • At this point, process A would see end of file, and read() would return 0 – no data to read. • Thus if the device needs to be shared amongst several process – Mutual Exclusion is required in a driver code. • Vector variants of read and write : readv and writev • struct iovec structures are created by the application, but the kernel copies them into kernel space before calling the driver. • Hence, struct iovec * is not a user space pointer in this case.
  • 41. The Big picture behind the scenes • An application wants to perform read/write operations on a new piece of hardware, say a printer. • Corresponding to that printer, a character device file (CDF) is present in the /dev dir, created using the mknod cmd (or otherwise). • The application in user space operates on the CDF as if it were a plain text file, by calling the usual read/write/open/close system calls. • These system calls map onto the corresponding methods written in the device driver kernel module. • The user space application is oblivious to the type of hardware it is communicating with, i.e. it does not know how to read/write data from/to the hardware. • All the application knows is how to issue read/write operations using the read/write system calls. • The device driver sits in kernel space and understands how to perform the printer-specific I/O. The driver maps the generic read/write/open/close file operations to device-specific operations. • Each device file has a major number which identifies the device driver associated with the device file; the driver performs I/O on behalf of the user-space application. Application -> file sys call (read/write etc.) on device file -> sys call maps to the corresponding handler in the device driver kernel module -> performs the actual I/O on the device.
  • 43. Kernel API to print device major and minor numbers • Occasionally, when printing a message from a driver, you will want to print the device number associated with the hardware of interest • the kernel provides a couple of utility macros (defined in <linux/kdev_t.h>) for this purpose : A. int print_dev_t(char *buffer, dev_t dev); B. char *format_dev_t(char *buffer, dev_t dev); • Both macros encode the device number into the given buffer Kernel API to print the user space process info in whose context the driver/kernel code is executing #include <linux/sched.h> struct task_struct *current_task = get_current(); printk(KERN_INFO "The process is \"%s\" (pid %i)\n", current_task->comm, current_task->pid); To ensure that a certain module has been successfully loaded into the kernel and to check its status: [root@localhost proc]# cat /proc/modules myscull 3511 0 - Live 0x0b852000 (O)
  • 44. Debugging using /proc file system • The /proc filesystem is a special, software-created filesystem • It is used to transfer information between userspace and kernel(including drivers) • /proc file system is usually used to transfer information residing in kernel space to user space, but sometimes can be used to transfer information the other dirn as well. • /proc is just a directory structure with dir and files arranged in hierarchical fashion. • Each file under /proc is tied to a kernel function that generates the file’s “contents” on the fly when the file is read • /proc/modules, for example, always returns a list of the currently loaded modules. • Many utilities on a modern Linux distribution, such as ps, top, and uptime, get their information from /proc • Device drivers also export information via /proc • The /proc filesystem is dynamic, so your module can add or remove entries at any time • Entries in /proc can be written to as well as read from, Most of the time, however, /proc entries are readonly files • The /proc filesystem is seen by the kernel developers as a bit of an uncontrolled mess that has gone far beyond its original purpose (which was to provide information about the processes running in the system) • Therefore, adding files under /proc is discouraged • The recommended way of making information available in new code is via sysfs. • files under /proc are slightly easier to create, and they are entirely suitable for debugging purposes • its use is discouraged nowadays. • More on this later ….
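For a 4.x kernel like the one quoted earlier, a debugging /proc entry is commonly created with the seq_file helpers; this is a hedged sketch (the entry name and contents are assumptions, and kernels 5.6+ use struct proc_ops in place of file_operations here).

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int scull_proc_show(struct seq_file *m, void *v)
{
        seq_printf(m, "scull: some debug state\n");   /* contents illustrative */
        return 0;
}

static int scull_proc_open(struct inode *inode, struct file *file)
{
        return single_open(file, scull_proc_show, NULL);
}

static const struct file_operations scull_proc_fops = {
        .owner   = THIS_MODULE,
        .open    = scull_proc_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        .release = single_release,
};

/* In module init:  proc_create("scull_debug", 0444, NULL, &scull_proc_fops);
 * In module exit:  remove_proc_entry("scull_debug", NULL); */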
  • 46. Introduction • The management of concurrency is, however, one of the core problems in operating systems programming. • Concurrency-related bugs are some of the easiest to create and some of the hardest to find • One of the main source of Concurrency – servicing multiple hardware interrupts concurrently • Device driver programmers must now factor concurrency into their designs from the beginning, and they must have a strong understanding of the facilities provided by the kernel for concurrency management. • Consider the following scenario: • There is a hardware device memory indicated by device file in /dev, say myscull0 • Lets say, process p1 and p2 open the device file and trigger write() operation on it. Both process write some data. • Depending on context switching of driver code (Kernel code is preemptible), memory write in driver may happen for process p1 first then p2 , or the other way • In both the cases, the data of one of the process which wrote first would going to be lost. • This is an example of Race condition resulting in inconsistency – including crash, panic or memory leak etc • Race conditions are a result of uncontrolled access to shared data • Device driver code should be re-entrant, that should be written assuming there can be multiple user space process attempting to access the same device at the same time. • Hence, time to learn Concurrency control principles in kernel ecosystem
  • 47. Driver Concurrent programming principles and guidelines • Avoid having shared resources in the first place. • Avoid Global Variables • But, Hardware resources are, by their nature, shared. Sharing is a fact of life. • The usual technique for access management is called locking or mutual exclusion—making sure that only one thread of execution can manipulate a shared resource at any time
  • 48. Mutual Exclusion - Semaphore • Mutual Exclusion is achieved using Semaphores and Mutexes • Linux implementation of Semaphores • must include <linux/semaphore.h> • type is struct semaphore • Static Semaphore initialization macros and fns • void sema_init(struct semaphore *sem, int val); where val is the initial value to assign to a semaphore. • Short hand macros • DECLARE_MUTEX(name); Same as : struct semaphore sem; sema_init(&sem, 1); • DECLARE_MUTEX_LOCKED(name); Same as : struct semaphore sem; sema_init(&sem, 0); • Dynamic Or runtime semaphore initialization void init_MUTEX(struct semaphore *sem); // initialized with initial val = 1 void init_MUTEX_LOCKED(struct semaphore *sem); // initialized with initial val = 0
  • 49. Mutual Exclusion – P (down) • down() decrements the value of the semaphore and if < 0, blocks the execution flow • There are three versions of down: • void down(struct semaphore *sem); >> non interruptible , may lead to non killable processes • int down_interruptible(struct semaphore *sem); >> Interrupting by user. if the operation is interrupted, the function returns a nonzero value, and the caller does not hold the semaphore. The fn return zero on success, and non zero value if the operation was interrupted. Hence a check on return value should be placed • int down_trylock(struct semaphore *sem); >> Non Blocking • Once a thread has successfully called one of the versions of down, it is said to be “holding” Or “acquired” the semaphore • When the operations requiring mutual exclusion are complete, the semaphore must be returned. The Linux equivalent to V is up (Signal). Next slide.
  • 50. Mutual Exclusion – V (up) • Once up has been called, the caller no longer holds the semaphore. • up increments the value of the semaphore • any thread that takes out a semaphore is required to release it with one (and only one) call to up • Fn prototype void up(struct semaphore *sem); Comparing with user space locking primitives For sem initialzed with 1 : down() = pthread_mutex_lock() up() = pthread_mutex_unlock() The major difference is that, pthread_mutex_unlock() should be called by the thread which called pthread_mutex_lock(). However, up() can be called by another kernel thread and kernel thread which called down() and got blocked, would resume. Hence, up() also behaves like signal.
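A hedged sketch of the P/V pattern inside a driver method (the semaphore protects a hypothetical shared device buffer); down_interruptible() lets a signal abort the wait.

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/semaphore.h>

static struct semaphore scull_sem;
/* in module init:  sema_init(&scull_sem, 1);   -- binary semaphore */

static ssize_t scull_write(struct file *filp, const char __user *buf,
                           size_t count, loff_t *f_pos)
{
        ssize_t ret;

        if (down_interruptible(&scull_sem))     /* P(): may block */
                return -ERESTARTSYS;            /* interrupted: we never got it */

        /* ... critical section: update the shared device buffer ... */
        ret = count;

        up(&scull_sem);                          /* V(): exactly one up per down */
        return ret;
}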
  • 51. Reader/Writer Semaphore (rarely used) • struct semaphore provides mutual exclusion to a critical section irrespective of the nature of the operation (read or write) the threads intend to perform on the shared resource • This locking mechanism is undesirable in a situation where all competing threads intend to perform read operations only. • This gives rise to the reader/writer problem • The Linux kernel provides a special type of semaphore called a rwsem (or “reader/writer semaphore”) for this situation, defined in <linux/rwsem.h> • struct rw_semaphore • Initialized at runtime using : void init_rwsem(struct rw_semaphore *sem); Read lock void down_read(struct rw_semaphore *sem); void up_read(struct rw_semaphore *sem); int down_read_trylock(struct rw_semaphore *sem); returns nonzero on success, zero otherwise Write lock void down_write(struct rw_semaphore *sem); void up_write(struct rw_semaphore *sem); int down_write_trylock(struct rw_semaphore *sem); returns nonzero on success, zero otherwise
  • 52. Reader/Writer Semaphore (rarely used) • Note that down_read may put the calling process into an uninterruptible sleep • down_read_trylock will not wait if read access is unavailable; it returns nonzero if access was granted, 0 otherwise. • If you have a situation where a writer lock is needed for a quickchange, followed by a longer period of readonly access, you can use downgrade_write to allow other readers in once you have finished making changes. void downgrade_write(struct rw_semaphore *sem); up_write(struct rw_semaphore *sem); • A rwsem allows either one writer or an unlimited number of readers to hold the semaphore. • Writers get priority; as soon as a writer tries to enter the critical section, no readers will be allowed in until all writers have completed their work. • This implementation can lead to reader starvation—where readers are denied access for a long time—if you have a large number of writers contending for the semaphore. • For this reason, rwsems are best used when write access is required only rarely, and writer access is held for short periods of time.
  • 53. Completions (introduced in kernel 2.4.7) Performance degradation with semaphores in situations where caller thread has to wait for external activity to complete A common pattern in kernel programming involves initiating some activity outside of the current thread, then waiting for that activity to complete. This activity can be the creation of a new kernel thread or user-space process, a request to an existing process, or some sort of hardware-based action. It such cases, it can be tempting to use a semaphore for synchronization of the two tasks, with code such as: struct semaphore sem; init_MUTEX_LOCKED(&sem); // initialse the semaphore with val 0 start_external_task(&sem); // start some external activity in new kernel thread down(&sem) // make the current thread wait The external task can then call up(&sem) when its work is done. This resume the caller thread. • if there is significant contention for the semaphore by the external thread, performance suffers • When used to communicate task completion in the way shown above, however, the thread calling down will almost always have to wait
  • 54. Completions (contd …) • As a solution to the problem we discussed, the mechanism of “Completions” was introduced in linux kernel Completions are a lightweight mechanism with one task: allowing one thread to tell another that the job is done • To use completions, your code must include <linux/completion.h> • URL : https://www.kernel.org/doc/Documentation/scheduler/completion.txt • If you have one or more threads of execution that must wait for some process to have reached a point or a specific state, completions can provide a race-free solution to this problem. • Semantically they are somewhat like a pthread_barrier and have similar use-cases. • Code is found in kernel/sched/completion.c
  • 55. Completions (contd …) • Declaring a Completion statically DECLARE_COMPLETION(my_completion); dynamically struct completion my_completion; init_completion(&my_completion); • Waiting for the completion is a simple matter of calling: void wait_for_completion(struct completion *c); Note that above function performs an uninterruptible wait. If your code calls wait_for_completion and nobody ever completes the task, the result will be an unkillable process !! • On the other side, the actual completion event may be signalled by calling one of the following: void complete(struct completion *c); void complete_all(struct completion *c);
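A hedged sketch of the pattern: the caller starts some external activity (a hypothetical kernel thread here) and then waits on a completion that the worker signals when its job is done.

#include <linux/completion.h>
#include <linux/kthread.h>

static DECLARE_COMPLETION(work_done);

static int worker_fn(void *data)
{
        /* ... perform the external activity ... */
        complete(&work_done);            /* wake exactly one waiter */
        return 0;
}

static void start_and_wait(void)
{
        kthread_run(worker_fn, NULL, "scull_worker");   /* error handling omitted */

        /* Uninterruptible wait until worker_fn() calls complete(). */
        wait_for_completion(&work_done);
}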
  • 56. Completions (contd …) • The complete() calls behave differently if more than one thread is waiting for the same completion event • complete() : wakes up only one of the waiting threads • complete_all() : allows all of the waiting threads on wait_for_completion() to proceed. In most cases, there is only one waiter, and the two functions will produce an identical result Completion structure is a one shot tool • Re-initialize the structure before using it again using macro : INIT_COMPLETION(struct completion c); Q. How using completion as a technique to wait for the event completion is different from the one implemented using semaphore ?
  • 57. Spin locks • Another locking mechanism in kernel is the spin locks • Spin locks are used for deploying mutual exclusion mechanism in a code that MUST not sleep (as a result of calling blocking call which results in context switching – wait_for_completion() OR down()) • Unlike Semaphores, Spin locks avoids context switching, hence performance much better than semaphores in routines which are not suppose to sleep such as interrupt handlers. • When a kernel thread tries to lock the already locked semaphore(binary semaphore), it go to sleep, context switching happens. • When a kernel thread tries to lock the already locked spin-lock, it goes into tight loop testing (“test”) the state of the lock in a loop until it becomes available (“set”) , no context switching happens • The “test and set” operation must be done in an atomic manner so that only one thread can obtain the lock, even if several are spinning at any given time • spinlocks are, by their nature, intended for use on multiprocessor systems • Next we study Spin lock API
  • 58. Spin lock kernel API • To be included : <linux/spinlock.h> • Data structure type : spinlock_t • Initialization compile time spinlock_t my_lock = SPIN_LOCK_UNLOCKED; run time void spin_lock_init(spinlock_t *lock); • Grab the lock void spin_lock(spinlock_t *lock); Note that all spinlock waits are, by their nature, uninterruptible. Once you call spin_lock, you will spin until the lock becomes available. • Release the lock void spin_unlock(spinlock_t *lock);
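A hedged sketch of the simplest case: a short, non-sleeping critical section protecting a shared counter. DEFINE_SPINLOCK is the current static initializer; older texts show the SPIN_LOCK_UNLOCKED form listed above.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(scull_lock);
static unsigned long scull_events;

static void scull_count_event(void)
{
        spin_lock(&scull_lock);          /* spins if another CPU holds the lock */
        scull_events++;                  /* short update of the shared counter */
        spin_unlock(&scull_lock);
}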
  • 59. Rules for using spin locks to avail high performance • the core rule that applies to spinlocks is that : any code must, while holding a spinlock, be atomic, i.e. It cannot sleep; in fact, it cannot relinquish the processor for any reason except to service interrupts (and sometimes not even then). Reason : if your kernel thread holding a spinlock is put to sleep, the new thread given cpu may happen to ask for the same lock, since it is already locked by the thread (whch is sleeping now), the new thread will spin until the sleeping thread is loaded back and release the lock – a performance hit • Hence, your kernel code should not call any fn which triggers context switching • But, what if natural kernel preemption kicks in ? Any time kernel code holds a spinlock, natural kernel preemption is disabled on the local processor • Writing code that will execute under a spinlock requires paying attention to every function that you call. Avoid using fn that triggers processor preemption. Examples of such fns could be : • Copying data to or from user space : the required user-space page may need to be swapped in from the disk before the copy can proceed • kmalloc
  • 60. Rules for using spin locks to avail high performance • But what happens if the thread holding a spinlock is interrupted, and the interrupt handler needs to access the same critical section? Two scenarios arise here : 1. If the interrupt service routine (ISR) runs on another processor, say p1 • ISR is loaded on processor p1 • ISR waits to acquire the spin lock held by the current thread t1 (running on processor p2) • t1 runs to completion and releases the spinlock • ISR grabs the lock, runs atomically and releases the spin lock when done 2. If the interrupt service routine runs on the same processor on which the kernel thread holding the spin lock is currently executing • t1 is interrupted immediately • ISR is loaded on the processor • ISR spins on the lock that is still held by t1, and t1 can never run again to release it • Deadlock !! Conclusion : hardware interrupts need to be disabled on the processor on which the spin lock is held if scenario 2 is likely to take place
  • 61. Spin Locks Summary When spin locks are used • Natural kernel context switching is disabled on the current processor • Interrupt lines are disabled on current processor • Programmer MUST not call sleep triggering fns in his code • spinlocks must always be held for the minimum time possible. • spinlocks are designed only to be used in code which should not sleep – interrupt handlers • spinlocks are, by their nature, intended for use on multiprocessor systems • Kernel Preemption can occur when • When returning to kernel-space from an interrupt handler • When kernel code becomes pre-emptible again • If a task in the kernel explicitly call a schedule • If a task in the kernel blocks
  • 62. Spin Locking APIs • Interrupt handlers are of two kinds: software IH (generated by soft irqs / bottom halves) and hardware IH (generated by hard irqs) • As we have seen earlier, not disabling interrupts while using spinlocks shared with interrupt handlers can lead to deadlocks • Depending on whether a software IH or a hardware IH could potentially access the shared resource, you should use the appropriate spin_lock() variant for disabling interrupts: spin_lock()/spin_unlock() – disables kernel preemption on the local processor, but leaves interrupts enabled; spin_lock_irq()/spin_unlock_irq() – also disables hardware interrupts on the local processor (without saving the previous interrupt state); spin_lock_bh()/spin_unlock_bh() – disables only software interrupts (bottom halves) on the local processor; spin_lock_irqsave()/spin_unlock_irqrestore() – disables interrupts on the local processor before taking the spinlock, with the previous interrupt state stored in flags
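When the data is also touched from a hardware interrupt handler, the irqsave variant is the safe default; a hedged sketch:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(dev_lock);

static void scull_update_from_process_context(void)
{
        unsigned long flags;

        /* Disables interrupts on this CPU and remembers their previous state. */
        spin_lock_irqsave(&dev_lock, flags);
        /* ... update data shared with the interrupt handler ... */
        spin_unlock_irqrestore(&dev_lock, flags);
}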
  • 64. Do’s and Dont’s while using locks Locking twice the same resource • Define the fns which lock the resource in a header file for others to use. Implicit internal fns which assumes the resource has been locked by the caller, either should be internal static fns, or well documented. • The fn which has locked the resource should not call other fn which in turn attempt to lock the resource again – Deadlock ! Lock Ordering Rules • Can lead to deadlock again. • Eg : lock(r1) lock(r2) lock(r2) lock(r1) Soln : Follow the same sequence of locking the multiple resources Try to avoid code acquiring multiple locks in the first place. Combination of Semaphores and Spinlocks • In situations where you need to use both , the semaphore and spinlock at the same time, Always use semaphore first before spin_lock. • The other way will lead to deadlock, and is a serious error
  • 65. Do’s and Dont’s while using locks Fine- Versus Coarse-Grained Locking • you should start with relatively coarse locking unless you have a real reason to believe that contention could be a problem • Avoid very coarse grained locks as well. • If you do suspect that lock contention is hurting performance, you may find the lockmeter tool useful. This patch (available at http://oss.sgi.com/projects/lockmeter/) instruments the kernel to measure time spent waiting in locks. • By looking at the report, you are able to determine quickly whether lock contention is truly the problem or not. Lock-Free Algorithms • If possible, try to design your data structure as a lock free data structure • Eg : circular buffers, available at <linux/kfifo.h> Atomic Variables • Try use Atomic variables and related kernel API to manipulate them, if a shared resource is as simple as an integer variable. • the kernel provides an atomic integer type called atomic_t, defined in <asm/atomic.h>. • It cannot hold integer value larger than represented by 24 bits • Refer pg 125, LDD3, for API.
  • 66. Do’s and Dont’s while using locks Atomic Bit Operations • The atomic_t type is good for performing integer arithmetic. It doesn’t work as well, however, when you need to manipulate individual bits in an atomic manner. • the kernel offers a set of functions that modify or test single bits atomically • Atomic bit operations are very fast, since they perform the operation using a single machine instruction without disabling interrupts (probably it is true for operations on atomic_t variable as well) • Bit operations are defined in <asm/bitops.h> • API : pg 127 , LDD3 NOTE : For the multiple threads of the same process in kernel space, there is only one and only one struct task_struct process instance in kernel space.
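A hedged sketch combining both facilities (the flag bit and counter names are assumptions): test_and_set_bit() gives a lock-free "single opener" check, and atomic_t counts users without any explicit lock.

#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/errno.h>

static atomic_t open_count = ATOMIC_INIT(0);   /* simple shared counter */
static unsigned long dev_flags;                /* bitmap of flag bits */
#define SCULL_FLAG_BUSY 0

static int scull_try_open(void)
{
        /* Atomically sets the bit and returns its previous value. */
        if (test_and_set_bit(SCULL_FLAG_BUSY, &dev_flags))
                return -EBUSY;                 /* someone already has the device */

        atomic_inc(&open_count);
        return 0;
}

static void scull_put(void)
{
        atomic_dec(&open_count);
        clear_bit(SCULL_FLAG_BUSY, &dev_flags);
}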
  • 67. Seq Locks - lockless access to a shared resource • Skipping …
  • 70. IOCTL (Input Output Control) • We start with implementing the ioctl system call, which is a common interface used for device control • ioctl system call controls and perform various other operations on a device • ioctl is often the easiest and most straightforward choice for true device operations • To be included - <linux/ioctl.h> • In user space, the ioctl system call has the following prototype: int ioctl(int fd, unsigned long cmd, ...); • Note that, the third argument is not a variable argument, but one single optional argument whose type checking is ignored at the compile time • The actual nature of the third argument depends on the specific control command being issued (the second argument) • Some commands take no arguments, some take an integer value, and some take a pointer to other data. Using a pointer is the way to pass arbitrary data to the ioctl call; the device is then able to exchange any amount of data with user space. • the value of the ioctl cmd argument is not currently used by the kernel, and it’s quite unlikely it will be in the future
  • 71. IOCTL (Input Output Control) • Ioctl() prototype in driver side int (*ioctl) (struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg); • Let us brief about the arguments passed to above ioctl callI: • The inode and filp pointers are the values corresponding to the file descriptor fd passed on by the application and are the same parameters passed to the open method • The cmd argument is passed from the user unchanged, it should be unique SYSTEM-WIDE • the optional arg argument is passed in the form of an unsigned long, regardless of whether it was given by the user as an integer or a pointer. • Because compile time type checking is disabled on the extra argument arg, the compiler can’t warn you if an invalid argument is passed to ioctl, and any associated bug would be difficult to spot Most ioctl implementations consist of a big switch statement that selects the correct behavior according to the cmd argument
  • 72. IOCTL (Input Output Control) • As stated, the middle argument of the user space ioctl fn, int cmd, should be unique system wide • There are Linux kernel conventions defined to choose a unique cmd value • The 32-bit cmd is split into smaller bit fields, each with a specific meaning: an 8-bit magic number, an 8-bit sequence number, 2 bits for the data transfer direction, and 14 bits for the size of the data referenced by the 3rd argument • The magic number is a unique 8-bit number, specific to a device. Every hardware device in the system being operated by a driver has a unique magic number; you should choose a magic number which has not been taken already • Taken/reserved magic numbers are listed in Documentation/ioctl-number.txt. • Since 14 bits are available for specifying the size of the user data to be passed to the driver, more than 2^14 bytes of data cannot be described in a single ioctl call Eg : _IOW(SCULL_IOC_MAGIC, 2, int) – gives the system-wide unique cmd no: magic no = SCULL_IOC_MAGIC, cmd seq no = 2, data transfer direction = WRITE (because the write macro is used), size of data = sizeof(int)
  • 73. IOCTL (Input Output Control) • Given the 32-bit number which represents cmd, the 2nd arg to the user space ioctl call, in the driver we can extract each component from cmd using the following macros • _IOC_TYPE(cmd) -- returns the magic no • _IOC_NR(cmd) -- returns the operation sequence no • _IOC_DIR(cmd) -- returns the direction: _IOC_READ means the user program wants to read the device memory, _IOC_WRITE means the user program wants to write into the device memory • _IOC_SIZE(cmd) -- returns the size of the data referenced by the 3rd argument
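A hedged sketch of a small command set and the usual switch-based handler ('k' and the command names are assumptions, not values cleared against ioctl-number.txt):

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/ioctl.h>
#include <linux/uaccess.h>

#define SCULL_IOC_MAGIC   'k'                             /* hypothetical magic number */
#define SCULL_IOCRESET    _IO(SCULL_IOC_MAGIC, 0)
#define SCULL_IOCSQUANTUM _IOW(SCULL_IOC_MAGIC, 1, int)   /* user -> driver */
#define SCULL_IOCGQUANTUM _IOR(SCULL_IOC_MAGIC, 2, int)   /* driver -> user */

static int scull_quantum = 4000;

static long scull_unlocked_ioctl(struct file *filp, unsigned int cmd,
                                 unsigned long arg)
{
        /* Reject commands meant for some other device. */
        if (_IOC_TYPE(cmd) != SCULL_IOC_MAGIC)
                return -ENOTTY;

        switch (cmd) {
        case SCULL_IOCRESET:
                scull_quantum = 4000;
                return 0;
        case SCULL_IOCSQUANTUM:
                return get_user(scull_quantum, (int __user *)arg);
        case SCULL_IOCGQUANTUM:
                return put_user(scull_quantum, (int __user *)arg);
        default:
                return -ENOTTY;
        }
}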
  • 74. Blocking IO • How does a driver respond to (ioctl or read operation) if it cannot immediately satisfy the request? • A call to read may come when no data is available, but more is expected in the future • Or a process could attempt to write, but your device is not ready to accept the data, because your output buffer is full. • your driver should (by default) block the process, putting it to sleep until the request can proceed • There are, however, a couple of rules that you must keep in mind to be able to code sleeps in a safe manner. Rule 1 : Never sleep when you are running in an atomic context. • An atomic context is simply a state where multiple steps must be performed without any sort of concurrent access. • our driver should not sleep while holding a spinlock, seqlock, or RCU lock. • You also cannot sleep if you have disabled interrupts. • It is legal to sleep while holding a semaphore though, but such code should be written very carefully. • any process that sleeps must check to be sure that the condition it was waiting for is really true when it wakes up again Rule 2 : you can make no assumptions about the state of the system after the thread wakes up, and it must check to ensure that the condition you were waiting for is, indeed, true.
• 75. Blocking IO
• Rule 3: A process cannot sleep unless it is assured that somebody else, somewhere, will wake it up.
• To accomplish controlled sleeping of a process, we use a wait queue.
• A wait queue is just what it sounds like: a list of processes, all waiting for a specific event.
• Wait queue
  • Data type: wait_queue_head_t
  • Defined in <linux/wait.h>
  • Static initialization:
    DECLARE_WAIT_QUEUE_HEAD(name);
  • Dynamic initialization:
    wait_queue_head_t my_queue;
    init_waitqueue_head(&my_queue);
• 76. Blocking IO – sleep and wake up
• Sleep
• A macro called wait_event is used to put the process to sleep. The forms of wait_event are listed below.
• queue is the wait queue head to use; notice that it is passed "by value".
• condition is an arbitrary boolean expression that is evaluated by the macro before and after sleeping; until condition evaluates to a true value, the process continues to sleep.
• Note that condition may be evaluated an arbitrary number of times, so it should not have any side effects.
• If you use wait_event, your process is put into an uninterruptible sleep.
• The preferred alternative is wait_event_interruptible, which can be interrupted by signals. It returns a nonzero value if the process was awakened by a signal, and zero if the condition is true; in the former case you should return -ERESTARTSYS.
• The final versions (wait_event_timeout and wait_event_interruptible_timeout) wait for a limited time; after that time period expires, the macros return 0 regardless of how condition evaluates.
• The condition should be tested atomically.
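The macro forms the slide refers to (2.6-era <linux/wait.h>), followed by a minimal read-side usage sketch; data_ready and my_queue are hypothetical names.

  wait_event(queue, condition);
  wait_event_interruptible(queue, condition);
  wait_event_timeout(queue, condition, timeout);
  wait_event_interruptible_timeout(queue, condition, timeout);

  /* Usage sketch: sleep until data_ready becomes true, or a signal arrives */
  static DECLARE_WAIT_QUEUE_HEAD(my_queue);
  static int data_ready;

  if (wait_event_interruptible(my_queue, data_ready))
      return -ERESTARTSYS;   /* woken by a signal, not by the condition */
  /* the condition was true here: safe to go on and copy data to user space */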
• 77. Blocking IO – sleep and wake up
• Wake up
• Some other thread of execution (a different process, or perhaps an interrupt handler) has to perform the wakeup for you, since your process is, of course, asleep.
• The basic function that wakes up sleeping processes is called wake_up; its forms are listed below.
• wake_up wakes up all processes waiting on the given queue.
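The wake-up calls the slide refers to, shown in their 2.6-era form (they are implemented as macros in <linux/wait.h>), plus a typical producer-side usage with the hypothetical names from the previous sketch.

  void wake_up(wait_queue_head_t *queue);               /* wakes every waiter on the queue */
  void wake_up_interruptible(wait_queue_head_t *queue); /* wakes only interruptible sleepers */

  /* Producer side: make the condition true, then wake the readers */
  data_ready = 1;
  wake_up_interruptible(&my_queue);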
• 78. Blocking & Non-Blocking IO
• Explicitly nonblocking I/O is indicated by the O_NONBLOCK flag in filp->f_flags.
• The flag is defined in <linux/fcntl.h>, which is automatically included by <linux/fs.h>.
• The flag is cleared by default, because the normal behavior of a process waiting for data is just to sleep.
• Behavior of blocking operations:
• read
  • If a process calls read but no data is (yet) available, the process must block. The process is awakened as soon as some data arrives, and that data is returned to the caller, even if there is less than the amount requested in the count argument to the method.
• write
  • If a process calls write and there is no space in the buffer, the process must block. The process is awakened and the write call succeeds, although the data may be only partially written if there isn't room in the buffer for the count bytes that were requested.
• Both these statements assume that there are both input and output buffers; in practice, almost every device driver has them.
• The input buffer is required to avoid losing data that arrives when nobody is reading.
• In contrast, data can't be lost on write, because if the system call doesn't accept data bytes, they remain in the user-space buffer. Even so, the output buffer is almost always useful for squeezing more performance out of the hardware.
• 79. Blocking & Non-Blocking IO
• Behavior of non-blocking operations:
• The behavior of read and write is different if O_NONBLOCK is specified. In this case, the calls simply return -EAGAIN ("try it again") if a process calls read when no data is available, or if it calls write when there's no space in the buffer.
• Non-blocking operations return immediately, allowing the application to poll for data.
• Applications can easily mistake a non-blocking "data not available" return for end-of-file; thus, they always have to check errno.
• Only the read, write, and open file operations are affected by the non-blocking flag.
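A hedged read-method fragment showing the O_NONBLOCK check before sleeping; dev->data_ready and dev->inq are hypothetical driver fields.

  if (!dev->data_ready) {
      if (filp->f_flags & O_NONBLOCK)
          return -EAGAIN;                       /* non-blocking: fail fast, let the app poll */
      if (wait_event_interruptible(dev->inq, dev->data_ready))
          return -ERESTARTSYS;                  /* interrupted by a signal while sleeping */
  }
  /* data is available here: copy it to user space */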
• 80. Non-Blocking IO
• Existing non-blocking system calls: poll, select, and epoll.
• Support for all three calls is provided through the driver's poll method, which has the following prototype:
  unsigned int (*poll) (struct file *filp, poll_table *wait);            // in kernel 2.6.x
  unsigned int (*poll) (struct file *, struct poll_table_struct *);      // in kernel 4.1.x
• The driver's poll method is called whenever the user-space program performs a poll, select, or epoll system call.
• In select, the user specifies sets of file descriptors. The select() system call iterates over every FD in the fd_set and calls the driver's poll() function for each device represented by an fd.
• Inside the poll() method, the driver must call poll_wait(), which implements the actual polling functionality:
  void poll_wait(struct file *filp, wait_queue_head_t *wait_address, poll_table *p);
• poll_wait() adds the current process to the wait queue (2nd argument), and adds the wait queue to the poll table. There is one poll table per process; it is the poll_table pointer passed to the driver's poll() method and forwarded as the 3rd argument to poll_wait().
• When poll() returns, all entries in the poll table are flushed, and they are repopulated when poll_wait() is called again.
• The poll_table is used internally by the kernel.
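A minimal poll-method sketch in the style of the LDD3 pipe example; the scull_pipe structure, its inq/outq/rp/wp fields, and the spacefree() helper are illustrative, not part of the kernel API.

  static unsigned int scull_p_poll(struct file *filp, poll_table *wait)
  {
      struct scull_pipe *dev = filp->private_data;
      unsigned int mask = 0;

      /* Register our wait queues with the poll table; this call never sleeps */
      poll_wait(filp, &dev->inq,  wait);
      poll_wait(filp, &dev->outq, wait);

      if (dev->rp != dev->wp)
          mask |= POLLIN  | POLLRDNORM;   /* device is readable */
      if (spacefree(dev))
          mask |= POLLOUT | POLLWRNORM;   /* device is writable */

      return mask;    /* a return of 0 makes the kernel block the caller */
  }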
• 81. Non-Blocking IO – select()/poll()
• The poll() fn performs its task in two phases:
  • Query phase
  • Processing phase
• Query phase:
• When the select system call is made in user space, the driver's poll() fn is triggered for each device whose FD appears in the fd_set passed to select().
• If data is not available:
  • If O_NONBLOCK is set: return -EAGAIN immediately from poll().
  • If O_NONBLOCK is not set: call poll_wait() and return 0 – returning zero causes the kernel to block on poll().
• If data is available:
  • If O_NONBLOCK is set: return POLLIN|POLLRDNORM – this sets the FD in user space and unblocks the select system call.
  • If O_NONBLOCK is not set: return POLLIN|POLLRDNORM – this sets the FD in user space and unblocks the select system call.
• As you can see, no actual data is read during the query phase of polling; the driver just tells user space whether the data is ready to be read or written.
• 82. Non-Blocking IO – select()/poll()
• The poll() fn performs its task in two phases:
  • Query phase
  • Processing phase
• Processing phase:
• The actual data read or write is done in this phase.
• When this phase completes, select() in user space completes one full poll iteration and blocks on the select() call again for the next iteration.
• Summarizing the sequence of control flow for a typical blocking, data-not-available scenario:
  select() --> poll() --> poll_wait() --> return 0 (block) --> wakeup() --> poll() --> return POLLIN|POLLRDNORM --> select() (unblocks) --> check fd_set --> perform the operation (say, read) --> select() (one iteration complete)
• A more detailed version of this flow is depicted in the rough sketch on the next slide.
• 83. [Diagram: detailed control flow of a blocking select()/poll() iteration]
• 84. Different locking mechanisms in the Linux kernel
• struct semaphore sem
  • Lock and unlock mechanism
• struct rw_semaphore
  • Handles the reader/writer problem
• struct completion
  • "Go and complete this work, and notify me when you are done; I will wait until then"
• Spinlocks
  • Lock and unlock mechanism without sleeping
• Reader/writer spinlocks
  • Lock and unlock mechanism without sleeping; handles the reader/writer problem
• RCU locking
• Seqlocks
• Wait queues
  • Used to implement blocking system calls
• 87. struct task_struct structure (defined in <linux/sched.h>)
• How to manipulate the state of a process in the kernel:
• struct task_struct *tsk;
• tsk->state can take the following values:
  • #define TASK_RUNNING 0            // not necessarily running on a CPU, but runnable
  • #define TASK_INTERRUPTIBLE 1      // the process is sleeping (interruptibly)
  • #define TASK_UNINTERRUPTIBLE 2    // the process is sleeping (uninterruptibly)
  • #define __TASK_STOPPED 4
  • #define __TASK_TRACED 8
• tsk->exit_state can take the following values:
  • #define EXIT_DEAD 16
  • #define EXIT_ZOMBIE 32
  • #define EXIT_TRACE (EXIT_ZOMBIE | EXIT_DEAD)
• Setting the current state of the process:
  • void set_current_state(int new_state);
  • current->state = TASK_INTERRUPTIBLE;   // changing the state directly is discouraged
• Changing the current state of a process does not, by itself, put it to sleep or wake it up. By changing the current state, you have changed the way the scheduler treats the process, but you have not yet yielded the processor.
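A manual-sleep sketch showing why setting the state alone is not enough: schedule() is what actually yields the processor. This is the raw pattern that the wait_event macros wrap; flag is a hypothetical condition variable, and real code should prefer wait queues to avoid the race between the test and the sleep.

  set_current_state(TASK_INTERRUPTIBLE);   /* tell the scheduler we intend to sleep */
  if (!flag)                               /* re-check the condition before sleeping */
      schedule();                          /* only now is the processor yielded */
  set_current_state(TASK_RUNNING);         /* make sure we are runnable again */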
• 88. struct task_struct structure (defined in <linux/sched.h>)
• How to inspect a process in the kernel:
• struct task_struct *tsk;
• tsk->pid   // gives the process id
• tsk->comm  // gives the process name (name of the binary)
• 90. kmalloc
• Thus far, we have used kmalloc and kfree for the allocation and freeing of memory.
• The kernel offers a unified memory-management interface to drivers.
• kmalloc
  • The function is fast (unless it blocks), and the call is preemptible.
  • It doesn't clear the memory it obtains; the allocated region still holds its previous content.
  • The allocated region is also contiguous in physical memory.
    #include <linux/slab.h>
    void *kmalloc(size_t size, int flags);
• flags, the 2nd argument, controls the behavior of kmalloc in a number of ways:
• GFP_KERNEL
  • Means that the allocation is performed on behalf of a process running in kernel space.
  • Using GFP_KERNEL means that kmalloc can put the current process to sleep waiting for a page when called in low-memory situations.
  • A function that allocates memory using GFP_KERNEL must, therefore, be reentrant and cannot be running in atomic context.
  • GFP_KERNEL isn't always the right allocation flag to use; sometimes kmalloc is called from outside a process's context, for instance in interrupt handlers, tasklets, and kernel timers. In such cases the current process should not be put to sleep, and the driver should use GFP_ATOMIC instead.
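A hedged allocation sketch contrasting the two common flags; struct my_dev, buf, and BUF_SIZE are illustrative names.

  #include <linux/slab.h>

  /* Process context: the allocation may sleep, so GFP_KERNEL is fine */
  struct my_dev *dev = kmalloc(sizeof(*dev), GFP_KERNEL);
  if (!dev)
      return -ENOMEM;
  memset(dev, 0, sizeof(*dev));        /* kmalloc does not clear the memory */

  /* Interrupt handler / tasklet / timer: sleeping is forbidden, use GFP_ATOMIC */
  char *buf = kmalloc(BUF_SIZE, GFP_ATOMIC);
  if (!buf)
      return;    /* must cope with failure; GFP_ATOMIC cannot wait for free pages */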
• 91. Kmalloc flags
• GFP_ATOMIC
  • The kernel normally tries to keep some free pages around in order to fulfill atomic allocations.
  • When GFP_ATOMIC is used, kmalloc can use even the last free page. If that last page does not exist, however, the allocation fails.
• All flags are defined in <linux/gfp.h>.
• GFP_USER
  • Used to allocate memory for user-space pages; it may sleep.
• GFP_HIGHUSER
• The allocation flags listed above can be augmented by ORing in any of the following flags, which change how the allocation is carried out:
  • __GFP_DMA – only the DMA-capable zone is searched to allocate a page
  • __GFP_HIGHMEM – all three zones (next slide) are searched to allocate a free page
  • __GFP_COLD
  • __GFP_NOWARN
  • __GFP_REPEAT
  • __GFP_NOFAIL
  • __GFP_NORETRY
• 92. Memory Zones
• The Linux kernel knows about a minimum of three memory zones:
• DMA-capable memory
  • Memory that lives in a preferential address range, where peripherals (hardware) can perform DMA access.
  • On the x86, the DMA zone is used for the first 16 MB of RAM.
• Normal memory
  • If no special flag (one starting with a double underscore) is present, both the normal and DMA zones are searched for a free page.
• High memory
  • High memory is a mechanism used to allow access to (relatively) large amounts of memory on 32-bit platforms.
  • This memory cannot be directly accessed from the kernel without first setting up a special mapping, and is generally harder to work with.
• The mechanism behind memory zones is implemented in mm/page_alloc.c.
• 93. Kmalloc argument: size
• The kernel manages the system's physical memory, which is available only in page-sized chunks.
• As a result, kmalloc looks rather different from a typical user-space malloc implementation.
• Kernel memory is not owned by any particular process – it belongs to everybody – so there is no per-process, virtual-address-based memory management in the kernel.
• The kernel uses a special page-oriented allocation technique, not the one used in user space, to get the best use from the system's RAM.
• Linux handles memory allocation by creating a set of pools of memory objects of fixed sizes. Allocation requests are handled by going to a pool that can hold sufficiently large objects and handing an entire memory chunk back to the requester.
• The kernel can allocate only certain predefined, fixed-size byte arrays (how much?). If you ask for an arbitrary amount of memory, you're likely to get slightly more than you asked for, up to twice as much.
• The smallest allocation that kmalloc can handle is as big as 32 or 64 bytes, depending on the page size used by the system's architecture.
• There is an upper limit to the size of memory chunks that can be allocated by kmalloc.
• If your code is to be completely portable, it cannot count on being able to allocate anything larger than 128 KB.
• If you need more than a few kilobytes, there are better ways than kmalloc to obtain memory.
• 94. Fragmentation
• First, let us understand the problem that arises in memory allocation: fragmentation.
• External fragmentation
• Internal fragmentation
• (YouTube link)
• 95. Fragmentation
• The concept of paging resolves the problem of fragmentation – external and internal – at the process level.
• Within a process, allocating memory objects (malloc/calloc) and freeing them at run time (free) leaves the process's virtual address space suffering from external and internal fragmentation.
• The slab allocation scheme attempts to solve the problem of fragmentation by grouping uniformly sized objects together.
• The algorithms – best fit, worst fit, and first fit – do not solve the problem of fragmentation, but mitigate it to some extent.
• 96. Kernel lookaside caches (also called the slab allocator)
• It is a technique to get rid of fragmentation in kernel memory.
• It is beneficial in drivers or kernel programming when objects of the same size need to be created and destroyed frequently by the kernel module.
• A lookaside cache is a pool of memory objects, all of the same size.
• Data type of a lookaside cache: kmem_cache_t
• API to create a lookaside cache (shown below): it creates a new cache object/lookaside cache that can host any number of memory areas, all of the same size, specified by the size argument.
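The creation prototype referred to above, in its 2.6-era form as documented in LDD3 (later kernels use struct kmem_cache * and have dropped the destructor argument):

  kmem_cache_t *kmem_cache_create(const char *name, size_t size,
          size_t offset, unsigned long flags,
          void (*constructor)(void *, kmem_cache_t *, unsigned long),
          void (*destructor)(void *, kmem_cache_t *, unsigned long));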
• 97. Kernel lookaside caches (also called the slab allocator)
• Arguments:
  • name – character name of the lookaside cache
  • size – size of each object hosted by the cache
  • offset – most likely 0; it is the offset of the first object in the page
  • flags – control how the allocation is done:
    • SLAB_NO_REAP – setting this flag protects the cache from being reduced when the system is looking for memory
    • SLAB_HWCACHE_ALIGN – this flag requires each data object to be aligned to a cache line
    • SLAB_CACHE_DMA – this flag requires each data object to be allocated in the DMA memory zone
  • constructor – used to initialize the data objects in the lookaside cache
  • destructor – used to de-initialize the data objects in the lookaside cache
• You cannot assume that the constructor will be called as an immediate effect of allocating an object. Similarly, destructors can be called at some unknown future time, not immediately after an object has been freed.
• 98. Kernel lookaside caches (also called the slab allocator)
• Constructor and destructor
  • When the constructor or destructor must run atomically, the SLAB_CTOR_ATOMIC flag is passed in the 3rd argument to the constructor/destructor.
  • For convenience, a programmer can use the same function for both the constructor and destructor; the slab allocator always passes the SLAB_CTOR_CONSTRUCTOR flag when the callee is a constructor.
• API to allocate an object hosted in the lookaside cache region
  • Once the lookaside cache has been created, the following API is used to allocate objects of the fixed size:
    void *kmem_cache_alloc(kmem_cache_t *cache, int flags);
  • Arguments:
    • cache – pointer to the slab cache/lookaside cache
    • flags – same as you would have passed to kmalloc
• 99. Kernel lookaside caches (also called the slab allocator)
• API to free an object hosted in the lookaside cache region:
  void kmem_cache_free(kmem_cache_t *cache, const void *obj);
  • Arguments:
    • cache – pointer to the slab cache/lookaside cache
    • obj – pointer to the cache object to be freed
• API to free the lookaside cache itself (usually when the driver is unloaded):
  void kmem_cache_destroy(kmem_cache_t *);
  • The destroy operation succeeds only if all objects allocated from the cache have been returned to it.
  • A module should check the return status from kmem_cache_destroy; a failure indicates some sort of memory leak within the module.
• The kernel maintains statistics on cache usage; they can be read from /proc/slabinfo.
• For the internal design of the slab allocator, see Jeff Bonwick's paper.
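A minimal end-to-end sketch of the lookaside-cache life cycle; struct my_obj and the cache name are hypothetical, and the prototypes are the 2.6-era ones used on these slides.

  static kmem_cache_t *my_cache;

  /* module init: one cache for all my_obj instances */
  my_cache = kmem_cache_create("my_obj_cache", sizeof(struct my_obj),
                               0, SLAB_HWCACHE_ALIGN, NULL, NULL);
  if (!my_cache)
      return -ENOMEM;

  /* fast path: constant-time, fixed-size allocation from the cache */
  struct my_obj *obj = kmem_cache_alloc(my_cache, GFP_KERNEL);
  if (!obj)
      return -ENOMEM;
  /* ... use the object ... */
  kmem_cache_free(my_cache, obj);

  /* module exit: fails (or warns) if any object is still outstanding */
  kmem_cache_destroy(my_cache);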
• 100. mempools
• The slab allocator improves performance by not repeatedly creating and destroying objects.
• It improves memory-usage efficiency by reducing internal and external fragmentation.
• It removes most of the need to interact with the low-level virtual memory APIs and memory hardware.
• However, it does not guarantee that a new slab allocation or de-allocation will succeed; that depends on the memory pressure on the kernel VM.
• There are scenarios where a driver requesting memory from the VM needs assurance that the request MUST succeed, whatever the case.
• To meet such a requirement, mempools come into the picture.
• A memory pool is really just a form of a lookaside cache that tries to always keep a list of free memory around for use in emergencies.
• mempools allocate a chunk of memory that sits in a list, idle and unavailable for any real use; hence, it is easy to consume a great deal of memory with mempools.
• When mempools are used, it is also often desirable to harness the advantages of slab allocators on top of the memory pools.
• Let us discuss the mempool APIs provided by the Linux kernel.
• 101. Mempool API
• A memory pool has the type mempool_t (defined in <linux/mempool.h>).
• Creation of a new pool (prototype below), where:
  • alloc_fn and free_fn are user-defined functions used to allocate and free the objects
  • min_nr – the minimum number of objects the pool must keep around from the moment it is created
  • pool_data – a pointer to data used by alloc_fn and free_fn when creating and destroying objects; it is passed as an argument to alloc_fn and free_fn
• The alloc_fn and free_fn prototypes are shown below as well.
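The creation call and the two callback types the slide describes, in their 2.6-era form (newer kernels use gfp_t for the mask):

  mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
                            mempool_free_t *free_fn, void *pool_data);

  typedef void *(mempool_alloc_t)(int gfp_mask, void *pool_data);
  typedef void  (mempool_free_t)(void *element, void *pool_data);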
• 102. Mempool API
• Once the pool is created, objects can be allocated and freed with the APIs shown below.
• Because these two functions take the mempool as an argument, they call the alloc/free functions registered with the mempool to allocate and free the objects. Hence, a mempool is a pool of memory for one specific object type, because these calls do not let the programmer specify the layout of the object to be created; mempools are used in conjunction with slab allocators.
• Resizing and destroying mempools:
  int mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask);
  void mempool_destroy(mempool_t *pool);   // all objects should be freed first
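The per-object calls the slide refers to, plus the common slab-backed construction; mempool_alloc_slab and mempool_free_slab are kernel-provided helpers that simply forward to kmem_cache_alloc/kmem_cache_free, while MY_MIN_OBJS and my_cache are hypothetical names.

  void *mempool_alloc(mempool_t *pool, int gfp_mask);
  void mempool_free(void *element, mempool_t *pool);

  /* Typical usage: a guaranteed reserve of objects on top of a slab cache */
  mempool_t *pool = mempool_create(MY_MIN_OBJS, mempool_alloc_slab,
                                   mempool_free_slab, my_cache);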
• 103. Getting entire memory pages
• If a module needs to allocate big chunks of memory, it is usually better to use a page-oriented technique.
• Requesting whole pages also has other advantages.
• To allocate pages, the functions shown below are available.
• The flags argument works in the same way as with kmalloc; usually either GFP_KERNEL or GFP_ATOMIC.
• order is the base-two logarithm of the number of pages you are requesting or freeing (i.e., log2 N). For example, order is 0 if you want one page and 3 if you request eight pages.
• If order is too big (no contiguous area of that size is available), the page allocation fails. The maximum allowed value for order is 10 or 11.
• If you are curious, /proc/buddyinfo tells you how many blocks of each order are available for each memory zone on the system.
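The page-allocation calls the slide refers to, in their 2.6-era form:

  unsigned long get_zeroed_page(unsigned int flags);   /* one page, cleared to zero */
  unsigned long __get_free_page(unsigned int flags);   /* one page, contents not cleared */
  unsigned long __get_free_pages(unsigned int flags,
                                 unsigned int order);  /* 2^order contiguous pages */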
• 104. Freeing memory pages
• When a program is done with the pages, it can free them with one of the macros/functions shown below.
• If you try to free a different number of pages from what you allocated, the memory map becomes corrupted, and the system gets in trouble at a later time.
• Working with whole pages removes internal fragmentation altogether.
• The main advantage of page-level allocation isn't actually speed, but rather more efficient memory usage. Allocating by pages wastes no memory, whereas using kmalloc wastes an unpredictable amount of memory because of allocation granularity.
• But the biggest advantage of the __get_free_page functions is that the pages obtained are completely yours, and you could, in theory, assemble the pages into a linear area by appropriate tweaking of the page tables. More on this later.
• It's worth stressing that the memory addresses returned by kmalloc and __get_free_pages are also virtual addresses.
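The matching free calls the slide refers to; order must be the same value used at allocation time:

  void free_page(unsigned long addr);
  void free_pages(unsigned long addr, unsigned long order);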
• 105. Freeing memory pages
• Alternate APIs for page-oriented memory allocation and deallocation are shown below.
• The last two allocation functions are wrappers over the first one.
• The flags are the usual GFP_ flags.
• The return value is a pointer to the first page allotted (a struct page pointer).
• nid – the NUMA node id.
• To release pages allocated in this manner, you should use one of the free calls shown below.
• If you have specific knowledge of whether a single page's contents are likely to be resident in the processor cache, you should communicate that to the kernel with free_hot_page (for cache-resident pages) or free_cold_page. This information helps the memory allocator optimize its use of memory across the system.
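The struct-page-oriented allocation and release calls the slide describes, in their 2.6-era form (free_hot_page and free_cold_page have since been removed from newer kernels):

  struct page *alloc_pages_node(int nid, unsigned int flags, unsigned int order);
  struct page *alloc_pages(unsigned int flags, unsigned int order);  /* current NUMA node */
  struct page *alloc_page(unsigned int flags);                       /* single page */

  void __free_page(struct page *page);
  void __free_pages(struct page *page, unsigned int order);
  void free_hot_page(struct page *page);     /* page contents likely in the CPU cache */
  void free_cold_page(struct page *page);    /* page contents likely not cached */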
• 108. So how does an I/O device 'interrupt' the CPU?
• The I/O device asserts (places electrical signals on) an interrupt request line.
• An interrupt request line consists of special circuitry added to the motherboard of the computer system that lets an I/O device send a signal directly to the CPU.
• When an interrupt-enabled I/O device is ready to receive or transfer data, it asserts an interrupt request line, thus issuing an interrupt request (IRQ) to the CPU.
• When the CPU detects an interrupt request, it temporarily suspends its work to service the request.
• Sources:
  • http://support.tenasys.com/INtimeHelp_5/ovw_interrupt.html
  • http://home.agh.edu.pl/~kozlak/PS2010/interrupts2.html
  • http://www.cs.mcgill.ca/~cs573/fall2002/notes/lec273/lecture20/20_2.htm (imp)
  • https://www.youtube.com/watch?v=cxkq8jIk7y0
• 109. Interrupt Handling
• There are two types of interaction between the CPU and the rest of the computer's hardware:
  • When the CPU gives orders to the hardware
  • When the hardware asks the CPU to respond – these are called interrupts
• Hardware devices typically have a very small amount of RAM, and if you don't read their information when it is available, it is lost. Hence interrupts should be serviced as soon as possible.
• Under Linux, hardware interrupts are called IRQs (Interrupt Requests).
• IRQs are of two types:
• Short
  • A short IRQ is one which is expected to take a very short period of processing time, during which the rest of the machine is blocked and no other interrupts are handled.
  • All interrupts are disabled on the processor currently servicing a short interrupt.
  • A short interrupt handler MUST not sleep, i.e. the process on whose behalf the interrupt is executing should stay in the TASK_RUNNING state, or else a system freeze can occur. Therefore, natural context switching is also disabled on that processor.
  • It must finish its task as soon as possible, by either completing it or deferring it to a bottom-half routine.
  • Therefore, interrupt handlers cannot perform any blocking procedure.
• Long
  • A long IRQ is one which can take longer, and during which other interrupts may occur (but not interrupts from the same device).
  • If at all possible, it's better to declare an interrupt handler to be long.
• 110. Service flow of an interrupt
• When the CPU receives an interrupt:
  • It stops whatever it is doing (unless it is processing a more important interrupt, in which case it will deal with this one only when the more important one is done).
  • It saves certain parameters on the stack and calls the interrupt handler. This means that certain things are not allowed in the interrupt handler itself, because the system is in an unknown state.
• The solution to this problem is for the interrupt handler to do what needs to be done immediately, usually reading something from the hardware or sending something to the hardware, and then schedule the handling of the new information for a later time (this is called the "bottom half") and return.
• The kernel is then guaranteed to call the bottom half as soon as possible – and when it does, everything allowed in kernel modules will be allowed.
• Sharing the same interrupt line
  • Several devices can be registered for the same interrupt number, meaning that multiple devices share the same IRQ line (see the registration sketch below).
  • When the interrupt corresponding to a shared IRQ number arrives, the kernel invokes each of the device-specific interrupt service routines registered on that line.
  • Thus, when an interrupt arrives on a shared IRQ line, we don't immediately know which device on that line generated it.
  • The kernel must discover which I/O device corresponds to the IRQ number before enabling interrupts.
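A hedged registration sketch for a shared interrupt line, assuming the 2.6-era request_irq signature; my_irq, my_handler, and my_dev are illustrative names. The last argument (dev_id) is what lets the kernel, and the handler itself, tell apart the devices sharing the line.

  if (request_irq(my_irq, my_handler,
                  IRQF_SHARED,        /* SA_SHIRQ on older 2.6 kernels */
                  "my_dev", my_dev))  /* a non-NULL dev_id is mandatory for shared lines */
      printk(KERN_ERR "my_dev: cannot register IRQ %d\n", my_irq);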
• 111. Dynamic interrupt lines
• Sometimes a peripheral is not assigned a dedicated IRQ line/number.
• Such peripherals are dynamically assigned an IRQ line when they need attention or are being accessed.
• Such an IRQ line/number is said to be dynamic.
• While a device is being serviced on a dynamic IRQ line, that line cannot be used to service another device at the same time.
• The IRQ line is assigned to service another device once the current device has been serviced and the line is declared available.