Merge branch 'akpm' (patches from Andrew)

Merge second patchbomb from Andrew Morton:

 - the rest of MM

 - various misc bits

 - add ability to run /sbin/reboot at reboot time

 - printk/vsprintf changes

 - fiddle with seq_printf() return value

* akpm: (114 commits)
  parisc: remove use of seq_printf return value
  lru_cache: remove use of seq_printf return value
  tracing: remove use of seq_printf return value
  cgroup: remove use of seq_printf return value
  proc: remove use of seq_printf return value
  s390: remove use of seq_printf return value
  cris fasttimer: remove use of seq_printf return value
  cris: remove use of seq_printf return value
  openrisc: remove use of seq_printf return value
  ARM: plat-pxa: remove use of seq_printf return value
  nios2: cpuinfo: remove use of seq_printf return value
  microblaze: mb: remove use of seq_printf return value
  ipc: remove use of seq_printf return value
  rtc: remove use of seq_printf return value
  power: wakeup: remove use of seq_printf return value
  x86: mtrr: if: remove use of seq_printf return value
  linux/bitmap.h: improve BITMAP_{LAST,FIRST}_WORD_MASK
  MAINTAINERS: CREDITS: remove Stefano Brivio from B43
  .mailmap: add Ricardo Ribalda
  CREDITS: add Ricardo Ribalda Delgado
  ...
Linus Torvalds 2015-04-15 16:39:15 -07:00
commit eea3a00264
136 changed files with 3273 additions and 1808 deletions
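The seq_printf() patches in this series all follow one mechanical pattern: stop consuming the return value (which is going away so seq_printf() can eventually become void) and, where the caller used it to detect a full buffer, test with seq_has_overflowed() instead. A minimal before/after sketch, using a hypothetical foo_show() callback:

#include <linux/seq_file.h>

/* Old style: the overflow check piggybacked on the return value. */
static int foo_show_old(struct seq_file *m, void *v)
{
	if (seq_printf(m, "value: %d\n", 42) < 0)
		return 0;	/* buffer full; seq_file retries with a larger one */
	return 0;
}

/* New style: call seq_printf(), then ask the seq_file if it overflowed. */
static int foo_show_new(struct seq_file *m, void *v)
{
	seq_printf(m, "value: %d\n", 42);
	if (seq_has_overflowed(m))
		return 0;	/* stop early; seq_file retries as needed */
	return 0;
}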


@ -100,6 +100,7 @@ Rajesh Shah <rajesh.shah@intel.com>
Ralf Baechle <ralf@linux-mips.org>
Ralf Wildenhues <Ralf.Wildenhues@gmx.de>
Rémi Denis-Courmont <rdenis@simphalempin.com>
Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Rudolf Marek <R.Marek@sh.cvut.cz>
Rui Saraiva <rmps@joel.ist.utl.pt>
Sachin P Sant <ssant@in.ibm.com>

CREDITS

@ -508,6 +508,10 @@ E: paul@paulbristow.net
W: http://paulbristow.net/linux/idefloppy.html
D: Maintainer of IDE/ATAPI floppy driver
N: Stefano Brivio
E: stefano.brivio@polimi.it
D: Broadcom B43 driver
N: Dominik Brodowski
E: linux@brodo.de
W: http://www.brodo.de/
@ -3008,6 +3012,19 @@ W: http://www.qsl.net/dl1bke/
D: Generic Z8530 driver, AX.25 DAMA slave implementation
D: Several AX.25 hacks
N: Ricardo Ribalda Delgado
E: ricardo.ribalda@gmail.com
W: http://ribalda.com
D: PLX USB338x driver
D: PCA9634 driver
D: Option GTM671WFS
D: Fintek F81216A
D: Various kernel hacks
S: Qtechnology A/S
S: Valby Langgade 142
S: 2500 Valby
S: Denmark
N: Francois-Rene Rideau
E: fare@tunes.org
W: http://www.tunes.org/~fare


@ -0,0 +1,119 @@
What: /sys/block/zram<id>/num_reads
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The num_reads file is read-only and specifies the number of
reads (failed or successful) done on this device.
Now accessible via zram<id>/stat node.
What: /sys/block/zram<id>/num_writes
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The num_writes file is read-only and specifies the number of
writes (failed or successful) done on this device.
Now accessible via zram<id>/stat node.
What: /sys/block/zram<id>/invalid_io
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The invalid_io file is read-only and specifies the number of
non-page-size-aligned I/O requests issued to this device.
Now accessible via zram<id>/io_stat node.
What: /sys/block/zram<id>/failed_reads
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The failed_reads file is read-only and specifies the number of
failed reads happened on this device.
Now accessible via zram<id>/io_stat node.
What: /sys/block/zram<id>/failed_writes
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The failed_writes file is read-only and specifies the number of
failed writes happened on this device.
Now accessible via zram<id>/io_stat node.
What: /sys/block/zram<id>/notify_free
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The notify_free file is read-only. Depending on device usage
scenario it may account a) the number of pages freed because
of swap slot free notifications or b) the number of pages freed
because of REQ_DISCARD requests sent by bio. The former ones
are sent to a swap block device when a swap slot is freed, which
implies that this disk is being used as a swap disk. The latter
ones are sent by filesystem mounted with discard option,
whenever some data blocks are getting discarded.
Now accessible via zram<id>/io_stat node.
What: /sys/block/zram<id>/zero_pages
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The zero_pages file is read-only and specifies number of zero
filled pages written to this disk. No memory is allocated for
such pages.
Now accessible via zram<id>/mm_stat node.
What: /sys/block/zram<id>/orig_data_size
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The orig_data_size file is read-only and specifies uncompressed
size of data stored in this disk. This excludes zero-filled
pages (zero_pages) since no memory is allocated for them.
Unit: bytes
Now accessible via zram<id>/mm_stat node.
What: /sys/block/zram<id>/compr_data_size
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The compr_data_size file is read-only and specifies compressed
size of data stored in this disk. So, compression ratio can be
calculated using orig_data_size and this statistic.
Unit: bytes
Now accessible via zram<id>/mm_stat node.
What: /sys/block/zram<id>/mem_used_total
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The mem_used_total file is read-only and specifies the amount
of memory, including allocator fragmentation and metadata
overhead, allocated for this disk. So, allocator space
efficiency can be calculated using compr_data_size and this
statistic.
Unit: bytes
Now accessible via zram<id>/mm_stat node.
What: /sys/block/zram<id>/mem_used_max
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The mem_used_max file is read/write and specifies the maximum
amount of memory zram has consumed to store compressed data.
To reset the value, write "0"; writing any other value returns
-EINVAL.
Unit: bytes
Downgraded to write-only node: so it's possible to set new
value only; its current value is stored in zram<id>/mm_stat
node.
What: /sys/block/zram<id>/mem_limit
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The mem_limit file is read/write and specifies the maximum
amount of memory ZRAM can use to store the compressed data.
The limit can be changed at run time; writing "0" disables the
limit. No limit is the initial state. Unit: bytes
Downgraded to write-only node: so it's possible to set new
value only; its current value is stored in zram<id>/mm_stat
node.
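As a concrete illustration of the reset behaviour described above, a minimal user-space sketch (device name zram0 assumed; error handling trimmed):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* mem_used_max: writing "0" resets the watermark; the current
	 * value is read back from zram0/mm_stat instead. */
	int fd = open("/sys/block/zram0/mem_used_max", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "0", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}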


@ -141,3 +141,28 @@ Description:
amount of memory ZRAM can use to store the compressed data. The
limit can be changed at run time; writing "0" disables the
limit. No limit is the initial state. Unit: bytes
What: /sys/block/zram<id>/compact
Date: August 2015
Contact: Minchan Kim <minchan@kernel.org>
Description:
The compact file is write-only and triggers compaction for the
allocator zram uses. The allocator moves some objects so that
it can free fragmented space.
What: /sys/block/zram<id>/io_stat
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The io_stat file is read-only and accumulates device's I/O
statistics not accounted by block layer. For example,
failed_reads, failed_writes, etc. File format is similar to
block layer statistics file format.
What: /sys/block/zram<id>/mm_stat
Date: August 2015
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The mm_stat file is read-only and represents device's mm
statistics (orig_data_size, compr_data_size, etc.) in a format
similar to block layer statistics file format.
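To illustrate the two new nodes, a small user-space sketch (zram0 assumed) that triggers compaction and then prints io_stat:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[128];
	ssize_t n;
	int fd;

	/* compact is write-only: any value written triggers compaction. */
	fd = open("/sys/block/zram0/compact", O_WRONLY);
	if (fd >= 0) {
		write(fd, "1", 1);
		close(fd);
	}

	/* io_stat fields: failed_reads failed_writes invalid_io notify_free */
	fd = open("/sys/block/zram0/io_stat", O_RDONLY);
	if (fd < 0)
		return 1;
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("io_stat: %s", buf);
	}
	close(fd);
	return 0;
}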


@ -98,20 +98,79 @@ size of the disk when not in use so a huge zram is wasteful.
mount /dev/zram1 /tmp
7) Stats:
Per-device statistics are exported as various nodes under
/sys/block/zram<id>/
disksize
num_reads
num_writes
failed_reads
failed_writes
invalid_io
notify_free
zero_pages
orig_data_size
compr_data_size
mem_used_total
mem_used_max
Per-device statistics are exported as various nodes under /sys/block/zram<id>/
A brief description of the exported device attributes follows. For more
details please read Documentation/ABI/testing/sysfs-block-zram.

Name              access  description
----              ------  -----------
disksize          RW      show and set the device's disk size
initstate         RO      shows the initialization state of the device
reset             WO      trigger device reset
num_reads         RO      the number of reads
failed_reads      RO      the number of failed reads
num_writes        RO      the number of writes
failed_writes     RO      the number of failed writes
invalid_io        RO      the number of non-page-size-aligned I/O requests
max_comp_streams  RW      the number of possible concurrent compress operations
comp_algorithm    RW      show and change the compression algorithm
notify_free       RO      the number of notifications to free pages (either
                          slot free notifications or REQ_DISCARD requests)
zero_pages        RO      the number of zero filled pages written to this disk
orig_data_size    RO      uncompressed size of data stored in this disk
compr_data_size   RO      compressed size of data stored in this disk
mem_used_total    RO      the amount of memory allocated for this disk
mem_used_max      RW      the maximum amount of memory zram has consumed to
                          store compressed data
mem_limit         RW      the maximum amount of memory ZRAM can use to store
                          the compressed data
num_migrated      RO      the number of objects migrated by compaction
WARNING
=======
per-stat sysfs attributes are considered to be deprecated.
The basic strategy is:
-- the existing RW nodes will be downgraded to WO nodes (in linux 4.11)
-- deprecated RO sysfs nodes will eventually be removed (in linux 4.11)
The list of deprecated attributes can be found here:
Documentation/ABI/obsolete/sysfs-block-zram
Basically, every attribute that has its own read accessible sysfs node
(e.g. num_reads) *AND* is accessible via one of the stat files (zram<id>/stat
or zram<id>/io_stat or zram<id>/mm_stat) is considered to be deprecated.
User space is advised to use the following files to read the device statistics.
File /sys/block/zram<id>/stat
Represents block layer statistics. Read Documentation/block/stat.txt for
details.
File /sys/block/zram<id>/io_stat
The stat file represents device's I/O statistics not accounted by block
layer and, thus, not available in zram<id>/stat file. It consists of a
single line of text and contains the following stats separated by
whitespace:
failed_reads
failed_writes
invalid_io
notify_free
File /sys/block/zram<id>/mm_stat
The stat file represents device's mm statistics. It consists of a single
line of text and contains the following stats separated by whitespace:
orig_data_size
compr_data_size
mem_used_total
mem_limit
mem_used_max
zero_pages
num_migrated
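For reference, a minimal user-space sketch (zram0 assumed) that reads mm_stat and computes the compression ratio from the field order listed above:

#include <stdio.h>

int main(void)
{
	unsigned long long orig, compr, used, limit, max_used, zero, migrated;
	FILE *f = fopen("/sys/block/zram0/mm_stat", "r");

	if (!f)
		return 1;
	/* Field order follows the mm_stat list above. */
	if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu",
		   &orig, &compr, &used, &limit, &max_used,
		   &zero, &migrated) == 7)
		printf("compression ratio: %.2f\n",
		       compr ? (double)orig / compr : 0.0);
	fclose(f);
	return 0;
}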
8) Deactivate:
swapoff /dev/zram0


@ -523,6 +523,7 @@ prototypes:
void (*close)(struct vm_area_struct*);
int (*fault)(struct vm_area_struct*, struct vm_fault *);
int (*page_mkwrite)(struct vm_area_struct *, struct vm_fault *);
int (*pfn_mkwrite)(struct vm_area_struct *, struct vm_fault *);
int (*access)(struct vm_area_struct *, unsigned long, void*, int, int);
locking rules:
@ -532,6 +533,7 @@ close: yes
fault: yes can return with page locked
map_pages: yes
page_mkwrite: yes can return with page locked
pfn_mkwrite: yes
access: yes
->fault() is called when a previously not present pte is about
@ -558,6 +560,12 @@ the page has been truncated, the filesystem should not look up a new page
like the ->fault() handler, but simply return with VM_FAULT_NOPAGE, which
will cause the VM to retry the fault.
->pfn_mkwrite() is the same as page_mkwrite but when the pte is
VM_PFNMAP or VM_MIXEDMAP with a page-less entry. Expected return is
VM_FAULT_NOPAGE. Or one of the VM_FAULT_ERROR types. The default behavior
after this call is to make the pte read-write, unless pfn_mkwrite returns
an error.
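Sketched below, following the prototype listed above, is the shape of a driver supplying a pfn_mkwrite handler; the foo_* names are hypothetical placeholders:

#include <linux/mm.h>

/* foo_mark_range_dirty() and foo_fault() are hypothetical driver helpers. */
static int foo_pfn_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	/* vmf->pgoff identifies the page-less (pfn) entry being written. */
	if (foo_mark_range_dirty(vma->vm_private_data, vmf->pgoff))
		return VM_FAULT_SIGBUS;	/* one of the VM_FAULT_ERROR types */

	return VM_FAULT_NOPAGE;	/* let the VM make the pte read-write */
}

static const struct vm_operations_struct foo_vm_ops = {
	.fault		= foo_fault,
	.pfn_mkwrite	= foo_pfn_mkwrite,
};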
->access() is called when get_user_pages() fails in
access_process_vm(), typically used to debug a process through
/proc/pid/mem or ptrace. This function is needed only for


@ -8,6 +8,21 @@ If variable is of Type, use printk format specifier:
unsigned long long %llu or %llx
size_t %zu or %zx
ssize_t %zd or %zx
s32 %d or %x
u32 %u or %x
s64 %lld or %llx
u64 %llu or %llx
If <type> is dependent on a config option for its size (e.g., sector_t,
blkcnt_t) or is architecture-dependent for its size (e.g., tcflag_t), use a
format specifier of its largest possible type and explicitly cast to it.
Example:
printk("test: sector number/total blocks: %llu/%llu\n",
(unsigned long long)sector, (unsigned long long)blockcount);
Reminder: sizeof() result is of type size_t.
Raw pointer value SHOULD be printed with %p. The kernel supports
the following extended format specifiers for pointer types:
@ -54,6 +69,7 @@ Struct Resources:
For printing struct resources. The 'R' and 'r' specifiers result in a
printed resource with ('R') or without ('r') a decoded flags member.
Passed by reference.
Physical addresses types phys_addr_t:
@ -132,6 +148,8 @@ MAC/FDDI addresses:
specifier to use reversed byte order suitable for visual interpretation
of Bluetooth addresses which are in the little endian order.
Passed by reference.
IPv4 addresses:
%pI4 1.2.3.4
@ -146,6 +164,8 @@ IPv4 addresses:
host, network, big or little endian order addresses respectively. Where
no specifier is provided the default network/big endian order is used.
Passed by reference.
IPv6 addresses:
%pI6 0001:0002:0003:0004:0005:0006:0007:0008
@ -160,6 +180,8 @@ IPv6 addresses:
print a compressed IPv6 address as described by
http://tools.ietf.org/html/rfc5952
Passed by reference.
IPv4/IPv6 addresses (generic, with port, flowinfo, scope):
%pIS 1.2.3.4 or 0001:0002:0003:0004:0005:0006:0007:0008
@ -186,6 +208,8 @@ IPv4/IPv6 addresses (generic, with port, flowinfo, scope):
specifiers can be used as well and are ignored in case of an IPv6
address.
Passed by reference.
Further examples:
%pISfc 1.2.3.4 or [1:2:3:4:5:6:7:8]/123456789
@ -207,6 +231,8 @@ UUID/GUID addresses:
Where no additional specifiers are used the default little endian
order with lower case hex characters will be printed.
Passed by reference.
dentry names:
%pd{,2,3,4}
%pD{,2,3,4}
@ -216,6 +242,8 @@ dentry names:
equivalent of %s dentry->d_name.name we used to use, %pd<n> prints
n last components. %pD does the same thing for struct file.
Passed by reference.
struct va_format:
%pV
@ -231,23 +259,20 @@ struct va_format:
Do not use this feature without some mechanism to verify the
correctness of the format string and va_list arguments.
	Passed by reference.

struct clk:
	%pC	pll1
	%pCn	pll1
	%pCr	1560000000

	For printing struct clk structures. '%pC' and '%pCn' print the name
	(Common Clock Framework) or address (legacy clock framework) of the
	structure; '%pCr' prints the current clock rate.

	Passed by reference.

u64 SHOULD be printed with %llu/%llx:
	printk("%llu", u64_var);

s64 SHOULD be printed with %lld/%llx:
	printk("%lld", s64_var);

If <type> is dependent on a config option for its size (e.g., sector_t,
blkcnt_t) or is architecture-dependent for its size (e.g., tcflag_t), use a
format specifier of its largest possible type and explicitly cast to it.

Example:
	printk("test: sector number/total blocks: %llu/%llu\n",
	    (unsigned long long)sector, (unsigned long long)blockcount);

Reminder: sizeof() result is of type size_t.
Thank you for your cooperation and attention.
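Tying a few of the specifiers above together, a hedged kernel-space sketch (the values and the print_examples() helper are made up for illustration):

#include <linux/ioport.h>
#include <linux/printk.h>
#include <linux/types.h>
#include <linux/uuid.h>

static void print_examples(struct resource *res, const __be32 *ipv4,
			   const uuid_le *uuid)
{
	u64 bytes = 123456789ULL;

	pr_info("region: %pR\n", res);		/* decoded struct resource */
	pr_info("peer:   %pI4\n", ipv4);	/* e.g. 1.2.3.4 */
	pr_info("uuid:   %pUl\n", uuid);	/* little endian, lower case */
	pr_info("size:   %llu bytes\n", bytes);	/* u64 printed with %llu */
}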


@ -21,6 +21,7 @@ Currently, these files are in /proc/sys/vm:
- admin_reserve_kbytes
- block_dump
- compact_memory
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
@ -106,6 +107,16 @@ huge pages although processes will also directly compact memory as required.
==============================================================
compact_unevictable_allowed
Available only when CONFIG_COMPACTION is set. When set to 1, compaction is
allowed to examine the unevictable lru (mlocked pages) for pages to compact.
This should be used on systems where stalls for minor page faults are an
acceptable trade for large contiguous free memory. Set to 0 to prevent
compaction from moving pages that are unevictable. Default value is 1.
==============================================================
dirty_background_bytes
Contains the amount of dirty memory at which the background kernel


@ -267,21 +267,34 @@ call, then it is required that system administrator mount a file system of
type hugetlbfs:
mount -t hugetlbfs \
-o uid=<value>,gid=<value>,mode=<value>,size=<value>,nr_inodes=<value> \
none /mnt/huge
-o uid=<value>,gid=<value>,mode=<value>,pagesize=<value>,size=<value>,\
min_size=<value>,nr_inodes=<value> none /mnt/huge
This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
/mnt/huge. Any files created on /mnt/huge use huge pages. The uid and gid
options set the owner and group of the root of the file system. By default
the uid and gid of the current process are taken. The mode option sets the
mode of root of file system to value & 01777. This value is given in octal.
By default the value 0755 is picked. The size option sets the maximum value of
memory (huge pages) allowed for that filesystem (/mnt/huge). The size is
rounded down to HPAGE_SIZE. The option nr_inodes sets the maximum number of
inodes that /mnt/huge can use. If the size or nr_inodes option is not
provided on command line then no limits are set. For size and nr_inodes
options, you can use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo. For
example, size=2K has the same meaning as size=2048.
By default the value 0755 is picked. If the platform supports multiple huge
page sizes, the pagesize option can be used to specify the huge page size and
associated pool. pagesize is specified in bytes. If pagesize is not specified
the platform's default huge page size and associated pool will be used. The
size option sets the maximum value of memory (huge pages) allowed for that
filesystem (/mnt/huge). The size option can be specified in bytes, or as a
percentage of the specified huge page pool (nr_hugepages). The size is
rounded down to HPAGE_SIZE boundary. The min_size option sets the minimum
value of memory (huge pages) allowed for the filesystem. min_size can be
specified in the same way as size, either bytes or a percentage of the
huge page pool. At mount time, the number of huge pages specified by
min_size are reserved for use by the filesystem. If there are not enough
free huge pages available, the mount will fail. As huge pages are allocated
to the filesystem and freed, the reserve count is adjusted so that the sum
of allocated and reserved huge pages is always at least min_size. The option
nr_inodes sets the maximum number of inodes that /mnt/huge can use. If the
size, min_size or nr_inodes option is not provided on command line then
no limits are set. For pagesize, size, min_size and nr_inodes options, you
can use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo. For example, size=2K
has the same meaning as size=2048.
While read system calls are supported on files that reside on hugetlb
file systems, write system calls are not.
@ -289,15 +302,23 @@ file systems, write system calls are not.
Regular chown, chgrp, and chmod commands (with right permissions) could be
used to change the file attributes on hugetlbfs.
Also, it is important to note that no such mount command is required if the
Also, it is important to note that no such mount command is required if
applications are going to use only shmat/shmget system calls or mmap with
MAP_HUGETLB. Users who wish to use hugetlb page via shared memory segment
should be a member of a supplementary group and system admin needs to
configure that gid into /proc/sys/vm/hugetlb_shm_group. It is possible for
same or different applications to use any combination of mmaps and shm*
calls, though the mount of filesystem will be required for using mmap calls
without MAP_HUGETLB. For an example of how to use mmap with MAP_HUGETLB see
map_hugetlb.c.
MAP_HUGETLB. For an example of how to use mmap with MAP_HUGETLB see map_hugetlb
below.
Users who wish to use hugetlb memory via shared memory segment should be a
member of a supplementary group and system admin needs to configure that gid
into /proc/sys/vm/hugetlb_shm_group. It is possible for same or different
applications to use any combination of mmaps and shm* calls, though the mount of
filesystem will be required for using mmap calls without MAP_HUGETLB.
Syscalls that operate on memory backed by hugetlb pages only have their lengths
aligned to the native page size of the processor; they will normally fail with
errno set to EINVAL or exclude hugetlb pages that extend beyond the length if
not hugepage aligned. For example, munmap(2) will fail if memory is backed by
a hugetlb page and the length is smaller than the hugepage size.
Examples
========
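map_hugetlb.c itself is not part of this hunk; as a hedged stand-in, a minimal sketch of the mmap(MAP_HUGETLB) path referred to above (the length is assumed to be a multiple of the huge page size):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define LENGTH (256UL * 1024 * 1024)

int main(void)
{
	/* No hugetlbfs mount is needed on this path; pages come from the
	 * default huge page pool (nr_hugepages). */
	void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* munmap() length must cover whole huge pages, as noted above. */
	munmap(addr, LENGTH);
	return 0;
}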


@ -22,6 +22,7 @@ CONTENTS
- Filtering special vmas.
- munlock()/munlockall() system call handling.
- Migrating mlocked pages.
- Compacting mlocked pages.
- mmap(MAP_LOCKED) system call handling.
- munmap()/exit()/exec() system call handling.
- try_to_unmap().
@ -450,6 +451,17 @@ list because of a race between munlock and migration, page migration uses the
putback_lru_page() function to add migrated pages back to the LRU.
COMPACTING MLOCKED PAGES
------------------------
The unevictable LRU can be scanned for compactable regions and the default
behavior is to do so. /proc/sys/vm/compact_unevictable_allowed controls
this behavior (see Documentation/sysctl/vm.txt). Once scanning of the
unevictable LRU is enabled, the work of compaction is mostly handled by
the page migration code and the same work flow as described in MIGRATING
MLOCKED PAGES will apply.
mmap(MAP_LOCKED) SYSTEM CALL HANDLING
-------------------------------------


@ -0,0 +1,70 @@
zsmalloc
--------
This allocator is designed for use with zram. Thus, the allocator is
supposed to work well under low memory conditions. In particular, it
never attempts higher order page allocation which is very likely to
fail under memory pressure. On the other hand, if we just use single
(0-order) pages, it would suffer from very high fragmentation --
any object of size PAGE_SIZE/2 or larger would occupy an entire page.
This was one of the major issues with its predecessor (xvmalloc).
To overcome these issues, zsmalloc allocates a bunch of 0-order pages
and links them together using various 'struct page' fields. These linked
pages act as a single higher-order page i.e. an object can span 0-order
page boundaries. The code refers to these linked pages as a single entity
called zspage.
For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
since this satisfies the requirements of all its current users (in the
worst case, page is incompressible and is thus stored "as-is" i.e. in
uncompressed form). For allocation requests larger than this size, failure
is returned (see zs_malloc).
Additionally, zs_malloc() does not return a dereferenceable pointer.
Instead, it returns an opaque handle (unsigned long) which encodes actual
location of the allocated object. The reason for this indirection is that
zsmalloc does not keep zspages permanently mapped since that would cause
issues on 32-bit systems where the VA region for kernel space mappings
is very small. So, before using the allocated memory, the object has to
be mapped using zs_map_object() to get a usable pointer and subsequently
unmapped using zs_unmap_object().
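A hedged in-kernel sketch of that allocate/map/use/unmap cycle (the exact zs_create_pool() arguments have varied between kernel versions):

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/string.h>
#include <linux/zsmalloc.h>

static int zsmalloc_example(void)
{
	struct zs_pool *pool;
	unsigned long handle;
	void *ptr;

	pool = zs_create_pool("example", GFP_KERNEL);
	if (!pool)
		return -ENOMEM;

	/* zs_malloc() hands back an opaque handle, not a pointer. */
	handle = zs_malloc(pool, 128);
	if (!handle) {
		zs_destroy_pool(pool);
		return -ENOMEM;
	}

	/* Map to get a usable pointer, touch the object, then unmap. */
	ptr = zs_map_object(pool, handle, ZS_MM_WO);
	memset(ptr, 0, 128);
	zs_unmap_object(pool, handle);

	zs_free(pool, handle);
	zs_destroy_pool(pool);
	return 0;
}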
stat
----
With CONFIG_ZSMALLOC_STAT, we can see zsmalloc internal information via
/sys/kernel/debug/zsmalloc/<user name>. Here is a sample of stat output:
# cat /sys/kernel/debug/zsmalloc/zram0/classes
class size almost_full almost_empty obj_allocated obj_used pages_used pages_per_zspage
..
..
9 176 0 1 186 129 8 4
10 192 1 0 2880 2872 135 3
11 208 0 1 819 795 42 2
12 224 0 1 219 159 12 4
..
..
class: index
size: object size zspage stores
almost_empty: the number of ZS_ALMOST_EMPTY zspages(see below)
almost_full: the number of ZS_ALMOST_FULL zspages(see below)
obj_allocated: the number of objects allocated
obj_used: the number of objects allocated to the user
pages_used: the number of pages allocated for the class
pages_per_zspage: the number of 0-order pages to make a zspage
We assign a zspage to ZS_ALMOST_EMPTY fullness group when:
n <= N / f, where
n = number of allocated objects
N = total number of objects zspage can store
f = fullness_threshold_frac(ie, 4 at the moment)
Similarly, we assign zspage to:
ZS_ALMOST_FULL when n > N / f
ZS_EMPTY when n == 0
ZS_FULL when n == N


@ -625,16 +625,16 @@ F: drivers/iommu/amd_iommu*.[ch]
F: include/linux/amd-iommu.h
AMD KFD
M: Oded Gabbay <oded.gabbay@amd.com>
L: dri-devel@lists.freedesktop.org
T: git git://people.freedesktop.org/~gabbayo/linux.git
S: Supported
F: drivers/gpu/drm/amd/amdkfd/
M: Oded Gabbay <oded.gabbay@amd.com>
L: dri-devel@lists.freedesktop.org
T: git git://people.freedesktop.org/~gabbayo/linux.git
S: Supported
F: drivers/gpu/drm/amd/amdkfd/
F: drivers/gpu/drm/amd/include/cik_structs.h
F: drivers/gpu/drm/amd/include/kgd_kfd_interface.h
F: drivers/gpu/drm/radeon/radeon_kfd.c
F: drivers/gpu/drm/radeon/radeon_kfd.h
F: include/uapi/linux/kfd_ioctl.h
F: drivers/gpu/drm/radeon/radeon_kfd.c
F: drivers/gpu/drm/radeon/radeon_kfd.h
F: include/uapi/linux/kfd_ioctl.h
AMD MICROCODE UPDATE SUPPORT
M: Borislav Petkov <bp@alien8.de>
@ -1915,16 +1915,14 @@ S: Maintained
F: drivers/media/radio/radio-aztech*
B43 WIRELESS DRIVER
M: Stefano Brivio <stefano.brivio@polimi.it>
L: linux-wireless@vger.kernel.org
L: b43-dev@lists.infradead.org
W: http://wireless.kernel.org/en/users/Drivers/b43
S: Maintained
S: Odd Fixes
F: drivers/net/wireless/b43/
B43LEGACY WIRELESS DRIVER
M: Larry Finger <Larry.Finger@lwfinger.net>
M: Stefano Brivio <stefano.brivio@polimi.it>
L: linux-wireless@vger.kernel.org
L: b43-dev@lists.infradead.org
W: http://wireless.kernel.org/en/users/Drivers/b43
@ -1967,10 +1965,10 @@ F: Documentation/filesystems/befs.txt
F: fs/befs/
BECKHOFF CX5020 ETHERCAT MASTER DRIVER
M: Dariusz Marcinkiewicz <reksio@newterm.pl>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/ec_bhf.c
M: Dariusz Marcinkiewicz <reksio@newterm.pl>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/ec_bhf.c
BFS FILE SYSTEM
M: "Tigran A. Aivazian" <tigran@aivazian.fsnet.co.uk>
@ -2896,11 +2894,11 @@ S: Supported
F: drivers/net/ethernet/chelsio/cxgb3/
CXGB3 ISCSI DRIVER (CXGB3I)
M: Karen Xie <kxie@chelsio.com>
L: linux-scsi@vger.kernel.org
W: http://www.chelsio.com
S: Supported
F: drivers/scsi/cxgbi/cxgb3i
M: Karen Xie <kxie@chelsio.com>
L: linux-scsi@vger.kernel.org
W: http://www.chelsio.com
S: Supported
F: drivers/scsi/cxgbi/cxgb3i
CXGB3 IWARP RNIC DRIVER (IW_CXGB3)
M: Steve Wise <swise@chelsio.com>
@ -2917,11 +2915,11 @@ S: Supported
F: drivers/net/ethernet/chelsio/cxgb4/
CXGB4 ISCSI DRIVER (CXGB4I)
M: Karen Xie <kxie@chelsio.com>
L: linux-scsi@vger.kernel.org
W: http://www.chelsio.com
S: Supported
F: drivers/scsi/cxgbi/cxgb4i
M: Karen Xie <kxie@chelsio.com>
L: linux-scsi@vger.kernel.org
W: http://www.chelsio.com
S: Supported
F: drivers/scsi/cxgbi/cxgb4i
CXGB4 IWARP RNIC DRIVER (IW_CXGB4)
M: Steve Wise <swise@chelsio.com>
@ -5223,7 +5221,7 @@ F: arch/x86/kernel/tboot.c
INTEL WIRELESS WIMAX CONNECTION 2400
M: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
M: linux-wimax@intel.com
L: wimax@linuxwimax.org (subscribers-only)
L: wimax@linuxwimax.org (subscribers-only)
S: Supported
W: http://linuxwimax.org
F: Documentation/wimax/README.i2400m
@ -5926,7 +5924,7 @@ F: arch/powerpc/platforms/512x/
F: arch/powerpc/platforms/52xx/
LINUX FOR POWERPC EMBEDDED PPC4XX
M: Alistair Popple <alistair@popple.id.au>
M: Alistair Popple <alistair@popple.id.au>
M: Matt Porter <mporter@kernel.crashing.org>
W: http://www.penguinppc.org/
L: linuxppc-dev@lists.ozlabs.org
@ -6399,7 +6397,7 @@ S: Supported
F: drivers/watchdog/mena21_wdt.c
MEN CHAMELEON BUS (mcb)
M: Johannes Thumshirn <johannes.thumshirn@men.de>
M: Johannes Thumshirn <johannes.thumshirn@men.de>
S: Supported
F: drivers/mcb/
F: include/linux/mcb.h
@ -7955,10 +7953,10 @@ L: rtc-linux@googlegroups.com
S: Maintained
QAT DRIVER
M: Tadeusz Struk <tadeusz.struk@intel.com>
L: qat-linux@intel.com
S: Supported
F: drivers/crypto/qat/
M: Tadeusz Struk <tadeusz.struk@intel.com>
L: qat-linux@intel.com
S: Supported
F: drivers/crypto/qat/
QIB DRIVER
M: Mike Marciniszyn <infinipath@intel.com>
@ -10129,11 +10127,11 @@ F: include/linux/cdrom.h
F: include/uapi/linux/cdrom.h
UNISYS S-PAR DRIVERS
M: Benjamin Romer <benjamin.romer@unisys.com>
M: David Kershner <david.kershner@unisys.com>
L: sparmaintainer@unisys.com (Unisys internal)
S: Supported
F: drivers/staging/unisys/
M: Benjamin Romer <benjamin.romer@unisys.com>
M: David Kershner <david.kershner@unisys.com>
L: sparmaintainer@unisys.com (Unisys internal)
S: Supported
F: drivers/staging/unisys/
UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER
M: Vinayak Holikatti <vinholikatti@gmail.com>
@ -10690,7 +10688,7 @@ F: drivers/media/rc/winbond-cir.c
WIMAX STACK
M: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
M: linux-wimax@intel.com
L: wimax@linuxwimax.org (subscribers-only)
L: wimax@linuxwimax.org (subscribers-only)
S: Supported
W: http://linuxwimax.org
F: Documentation/wimax/README.wimax
@ -10981,6 +10979,7 @@ L: linux-mm@kvack.org
S: Maintained
F: mm/zsmalloc.c
F: include/linux/zsmalloc.h
F: Documentation/vm/zsmalloc.txt
ZSWAP COMPRESSED SWAP CACHING
M: Seth Jennings <sjennings@variantweb.net>


@ -51,19 +51,19 @@ static struct dentry *dbgfs_root, *dbgfs_state, **dbgfs_chan;
static int dbg_show_requester_chan(struct seq_file *s, void *p)
{
int pos = 0;
int chan = (int)s->private;
int i;
u32 drcmr;
pos += seq_printf(s, "DMA channel %d requesters list :\n", chan);
seq_printf(s, "DMA channel %d requesters list :\n", chan);
for (i = 0; i < DMA_MAX_REQUESTERS; i++) {
drcmr = DRCMR(i);
if ((drcmr & DRCMR_CHLNUM) == chan)
pos += seq_printf(s, "\tRequester %d (MAPVLD=%d)\n", i,
!!(drcmr & DRCMR_MAPVLD));
seq_printf(s, "\tRequester %d (MAPVLD=%d)\n",
i, !!(drcmr & DRCMR_MAPVLD));
}
return pos;
return 0;
}
static inline int dbg_burst_from_dcmd(u32 dcmd)
@ -83,7 +83,6 @@ static int is_phys_valid(unsigned long addr)
static int dbg_show_descriptors(struct seq_file *s, void *p)
{
int pos = 0;
int chan = (int)s->private;
int i, max_show = 20, burst, width;
u32 dcmd;
@ -94,44 +93,43 @@ static int dbg_show_descriptors(struct seq_file *s, void *p)
spin_lock_irqsave(&dma_channels[chan].lock, flags);
phys_desc = DDADR(chan);
pos += seq_printf(s, "DMA channel %d descriptors :\n", chan);
pos += seq_printf(s, "[%03d] First descriptor unknown\n", 0);
seq_printf(s, "DMA channel %d descriptors :\n", chan);
seq_printf(s, "[%03d] First descriptor unknown\n", 0);
for (i = 1; i < max_show && is_phys_valid(phys_desc); i++) {
desc = phys_to_virt(phys_desc);
dcmd = desc->dcmd;
burst = dbg_burst_from_dcmd(dcmd);
width = (1 << ((dcmd >> 14) & 0x3)) >> 1;
pos += seq_printf(s, "[%03d] Desc at %08lx(virt %p)\n",
i, phys_desc, desc);
pos += seq_printf(s, "\tDDADR = %08x\n", desc->ddadr);
pos += seq_printf(s, "\tDSADR = %08x\n", desc->dsadr);
pos += seq_printf(s, "\tDTADR = %08x\n", desc->dtadr);
pos += seq_printf(s, "\tDCMD = %08x (%s%s%s%s%s%s%sburst=%d"
" width=%d len=%d)\n",
dcmd,
DCMD_STR(INCSRCADDR), DCMD_STR(INCTRGADDR),
DCMD_STR(FLOWSRC), DCMD_STR(FLOWTRG),
DCMD_STR(STARTIRQEN), DCMD_STR(ENDIRQEN),
DCMD_STR(ENDIAN), burst, width,
dcmd & DCMD_LENGTH);
seq_printf(s, "[%03d] Desc at %08lx(virt %p)\n",
i, phys_desc, desc);
seq_printf(s, "\tDDADR = %08x\n", desc->ddadr);
seq_printf(s, "\tDSADR = %08x\n", desc->dsadr);
seq_printf(s, "\tDTADR = %08x\n", desc->dtadr);
seq_printf(s, "\tDCMD = %08x (%s%s%s%s%s%s%sburst=%d width=%d len=%d)\n",
dcmd,
DCMD_STR(INCSRCADDR), DCMD_STR(INCTRGADDR),
DCMD_STR(FLOWSRC), DCMD_STR(FLOWTRG),
DCMD_STR(STARTIRQEN), DCMD_STR(ENDIRQEN),
DCMD_STR(ENDIAN), burst, width,
dcmd & DCMD_LENGTH);
phys_desc = desc->ddadr;
}
if (i == max_show)
pos += seq_printf(s, "[%03d] Desc at %08lx ... max display reached\n",
i, phys_desc);
seq_printf(s, "[%03d] Desc at %08lx ... max display reached\n",
i, phys_desc);
else
pos += seq_printf(s, "[%03d] Desc at %08lx is %s\n",
i, phys_desc, phys_desc == DDADR_STOP ?
"DDADR_STOP" : "invalid");
seq_printf(s, "[%03d] Desc at %08lx is %s\n",
i, phys_desc, phys_desc == DDADR_STOP ?
"DDADR_STOP" : "invalid");
spin_unlock_irqrestore(&dma_channels[chan].lock, flags);
return pos;
return 0;
}
static int dbg_show_chan_state(struct seq_file *s, void *p)
{
int pos = 0;
int chan = (int)s->private;
u32 dcsr, dcmd;
int burst, width;
@ -142,42 +140,39 @@ static int dbg_show_chan_state(struct seq_file *s, void *p)
burst = dbg_burst_from_dcmd(dcmd);
width = (1 << ((dcmd >> 14) & 0x3)) >> 1;
pos += seq_printf(s, "DMA channel %d\n", chan);
pos += seq_printf(s, "\tPriority : %s\n",
str_prio[dma_channels[chan].prio]);
pos += seq_printf(s, "\tUnaligned transfer bit: %s\n",
DALGN & (1 << chan) ? "yes" : "no");
pos += seq_printf(s, "\tDCSR = %08x (%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s)\n",
dcsr, DCSR_STR(RUN), DCSR_STR(NODESC),
DCSR_STR(STOPIRQEN), DCSR_STR(EORIRQEN),
DCSR_STR(EORJMPEN), DCSR_STR(EORSTOPEN),
DCSR_STR(SETCMPST), DCSR_STR(CLRCMPST),
DCSR_STR(CMPST), DCSR_STR(EORINTR), DCSR_STR(REQPEND),
DCSR_STR(STOPSTATE), DCSR_STR(ENDINTR),
DCSR_STR(STARTINTR), DCSR_STR(BUSERR));
seq_printf(s, "DMA channel %d\n", chan);
seq_printf(s, "\tPriority : %s\n", str_prio[dma_channels[chan].prio]);
seq_printf(s, "\tUnaligned transfer bit: %s\n",
DALGN & (1 << chan) ? "yes" : "no");
seq_printf(s, "\tDCSR = %08x (%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s)\n",
dcsr, DCSR_STR(RUN), DCSR_STR(NODESC),
DCSR_STR(STOPIRQEN), DCSR_STR(EORIRQEN),
DCSR_STR(EORJMPEN), DCSR_STR(EORSTOPEN),
DCSR_STR(SETCMPST), DCSR_STR(CLRCMPST),
DCSR_STR(CMPST), DCSR_STR(EORINTR), DCSR_STR(REQPEND),
DCSR_STR(STOPSTATE), DCSR_STR(ENDINTR),
DCSR_STR(STARTINTR), DCSR_STR(BUSERR));
pos += seq_printf(s, "\tDCMD = %08x (%s%s%s%s%s%s%sburst=%d width=%d"
" len=%d)\n",
dcmd,
DCMD_STR(INCSRCADDR), DCMD_STR(INCTRGADDR),
DCMD_STR(FLOWSRC), DCMD_STR(FLOWTRG),
DCMD_STR(STARTIRQEN), DCMD_STR(ENDIRQEN),
DCMD_STR(ENDIAN), burst, width, dcmd & DCMD_LENGTH);
pos += seq_printf(s, "\tDSADR = %08x\n", DSADR(chan));
pos += seq_printf(s, "\tDTADR = %08x\n", DTADR(chan));
pos += seq_printf(s, "\tDDADR = %08x\n", DDADR(chan));
return pos;
seq_printf(s, "\tDCMD = %08x (%s%s%s%s%s%s%sburst=%d width=%d len=%d)\n",
dcmd,
DCMD_STR(INCSRCADDR), DCMD_STR(INCTRGADDR),
DCMD_STR(FLOWSRC), DCMD_STR(FLOWTRG),
DCMD_STR(STARTIRQEN), DCMD_STR(ENDIRQEN),
DCMD_STR(ENDIAN), burst, width, dcmd & DCMD_LENGTH);
seq_printf(s, "\tDSADR = %08x\n", DSADR(chan));
seq_printf(s, "\tDTADR = %08x\n", DTADR(chan));
seq_printf(s, "\tDDADR = %08x\n", DDADR(chan));
return 0;
}
static int dbg_show_state(struct seq_file *s, void *p)
{
int pos = 0;
/* basic device status */
pos += seq_printf(s, "DMA engine status\n");
pos += seq_printf(s, "\tChannel number: %d\n", num_dma_channels);
seq_puts(s, "DMA engine status\n");
seq_printf(s, "\tChannel number: %d\n", num_dma_channels);
return pos;
return 0;
}
#define DBGFS_FUNC_DECL(name) \


@ -527,7 +527,8 @@ static int proc_fasttimer_show(struct seq_file *m, void *v)
i = debug_log_cnt;
while (i != end_i || debug_log_cnt_wrapped) {
if (seq_printf(m, debug_log_string[i], debug_log_value[i]) < 0)
seq_printf(m, debug_log_string[i], debug_log_value[i]);
if (seq_has_overflowed(m))
return 0;
i = (i+1) % DEBUG_LOG_MAX;
}
@ -542,24 +543,22 @@ static int proc_fasttimer_show(struct seq_file *m, void *v)
int cur = (fast_timers_started - i - 1) % NUM_TIMER_STATS;
#if 1 //ndef FAST_TIMER_LOG
seq_printf(m, "div: %i freq: %i delay: %i"
"\n",
seq_printf(m, "div: %i freq: %i delay: %i\n",
timer_div_settings[cur],
timer_freq_settings[cur],
timer_delay_settings[cur]);
#endif
#ifdef FAST_TIMER_LOG
t = &timer_started_log[cur];
if (seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu "
"d: %6li us data: 0x%08lX"
"\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data) < 0)
seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu d: %6li us data: 0x%08lX\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data);
if (seq_has_overflowed(m))
return 0;
#endif
}
@ -571,16 +570,15 @@ static int proc_fasttimer_show(struct seq_file *m, void *v)
seq_printf(m, "Timers added: %i\n", fast_timers_added);
for (i = 0; i < num_to_show; i++) {
t = &timer_added_log[(fast_timers_added - i - 1) % NUM_TIMER_STATS];
if (seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu "
"d: %6li us data: 0x%08lX"
"\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data) < 0)
seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu d: %6li us data: 0x%08lX\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data);
if (seq_has_overflowed(m))
return 0;
}
seq_putc(m, '\n');
@ -590,16 +588,15 @@ static int proc_fasttimer_show(struct seq_file *m, void *v)
seq_printf(m, "Timers expired: %i\n", fast_timers_expired);
for (i = 0; i < num_to_show; i++) {
t = &timer_expired_log[(fast_timers_expired - i - 1) % NUM_TIMER_STATS];
if (seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu "
"d: %6li us data: 0x%08lX"
"\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data) < 0)
seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu d: %6li us data: 0x%08lX\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data);
if (seq_has_overflowed(m))
return 0;
}
seq_putc(m, '\n');
@ -611,19 +608,15 @@ static int proc_fasttimer_show(struct seq_file *m, void *v)
while (t) {
nextt = t->next;
local_irq_restore(flags);
if (seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu "
"d: %6li us data: 0x%08lX"
/* " func: 0x%08lX" */
"\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data
/* , t->function */
) < 0)
seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu d: %6li us data: 0x%08lX\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data);
if (seq_has_overflowed(m))
return 0;
local_irq_save(flags);
if (t->next != nextt)


@ -63,35 +63,37 @@ int show_cpuinfo(struct seq_file *m, void *v)
else
info = &cpu_info[revision];
return seq_printf(m,
"processor\t: 0\n"
"cpu\t\t: CRIS\n"
"cpu revision\t: %lu\n"
"cpu model\t: %s\n"
"cache size\t: %d kB\n"
"fpu\t\t: %s\n"
"mmu\t\t: %s\n"
"mmu DMA bug\t: %s\n"
"ethernet\t: %s Mbps\n"
"token ring\t: %s\n"
"scsi\t\t: %s\n"
"ata\t\t: %s\n"
"usb\t\t: %s\n"
"bogomips\t: %lu.%02lu\n",
seq_printf(m,
"processor\t: 0\n"
"cpu\t\t: CRIS\n"
"cpu revision\t: %lu\n"
"cpu model\t: %s\n"
"cache size\t: %d kB\n"
"fpu\t\t: %s\n"
"mmu\t\t: %s\n"
"mmu DMA bug\t: %s\n"
"ethernet\t: %s Mbps\n"
"token ring\t: %s\n"
"scsi\t\t: %s\n"
"ata\t\t: %s\n"
"usb\t\t: %s\n"
"bogomips\t: %lu.%02lu\n",
revision,
info->model,
info->cache,
info->flags & HAS_FPU ? "yes" : "no",
info->flags & HAS_MMU ? "yes" : "no",
info->flags & HAS_MMU_BUG ? "yes" : "no",
info->flags & HAS_ETHERNET100 ? "10/100" : "10",
info->flags & HAS_TOKENRING ? "4/16 Mbps" : "no",
info->flags & HAS_SCSI ? "yes" : "no",
info->flags & HAS_ATA ? "yes" : "no",
info->flags & HAS_USB ? "yes" : "no",
(loops_per_jiffy * HZ + 500) / 500000,
((loops_per_jiffy * HZ + 500) / 5000) % 100);
revision,
info->model,
info->cache,
info->flags & HAS_FPU ? "yes" : "no",
info->flags & HAS_MMU ? "yes" : "no",
info->flags & HAS_MMU_BUG ? "yes" : "no",
info->flags & HAS_ETHERNET100 ? "10/100" : "10",
info->flags & HAS_TOKENRING ? "4/16 Mbps" : "no",
info->flags & HAS_SCSI ? "yes" : "no",
info->flags & HAS_ATA ? "yes" : "no",
info->flags & HAS_USB ? "yes" : "no",
(loops_per_jiffy * HZ + 500) / 500000,
((loops_per_jiffy * HZ + 500) / 5000) % 100);
return 0;
}
#endif /* CONFIG_PROC_FS */


@ -501,7 +501,8 @@ static int proc_fasttimer_show(struct seq_file *m, void *v)
i = debug_log_cnt;
while ((i != end_i || debug_log_cnt_wrapped)) {
if (seq_printf(m, debug_log_string[i], debug_log_value[i]) < 0)
seq_printf(m, debug_log_string[i], debug_log_value[i]);
if (seq_has_overflowed(m))
return 0;
i = (i+1) % DEBUG_LOG_MAX;
}
@ -516,23 +517,21 @@ static int proc_fasttimer_show(struct seq_file *m, void *v)
int cur = (fast_timers_started - i - 1) % NUM_TIMER_STATS;
#if 1 //ndef FAST_TIMER_LOG
seq_printf(m, "div: %i delay: %i"
"\n",
seq_printf(m, "div: %i delay: %i\n",
timer_div_settings[cur],
timer_delay_settings[cur]);
#endif
#ifdef FAST_TIMER_LOG
t = &timer_started_log[cur];
if (seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu "
"d: %6li us data: 0x%08lX"
"\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data) < 0)
seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu d: %6li us data: 0x%08lX\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data);
if (seq_has_overflowed(m))
return 0;
#endif
}
@ -544,16 +543,15 @@ static int proc_fasttimer_show(struct seq_file *m, void *v)
seq_printf(m, "Timers added: %i\n", fast_timers_added);
for (i = 0; i < num_to_show; i++) {
t = &timer_added_log[(fast_timers_added - i - 1) % NUM_TIMER_STATS];
if (seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu "
"d: %6li us data: 0x%08lX"
"\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data) < 0)
seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu d: %6li us data: 0x%08lX\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data);
if (seq_has_overflowed(m))
return 0;
}
seq_putc(m, '\n');
@ -563,16 +561,15 @@ static int proc_fasttimer_show(struct seq_file *m, void *v)
seq_printf(m, "Timers expired: %i\n", fast_timers_expired);
for (i = 0; i < num_to_show; i++){
t = &timer_expired_log[(fast_timers_expired - i - 1) % NUM_TIMER_STATS];
if (seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu "
"d: %6li us data: 0x%08lX"
"\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data) < 0)
seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu d: %6li us data: 0x%08lX\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data);
if (seq_has_overflowed(m))
return 0;
}
seq_putc(m, '\n');
@ -584,19 +581,15 @@ static int proc_fasttimer_show(struct seq_file *m, void *v)
while (t != NULL){
nextt = t->next;
local_irq_restore(flags);
if (seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu "
"d: %6li us data: 0x%08lX"
/* " func: 0x%08lX" */
"\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data
/* , t->function */
) < 0)
seq_printf(m, "%-14s s: %6lu.%06lu e: %6lu.%06lu d: %6li us data: 0x%08lX\n",
t->name,
(unsigned long)t->tv_set.tv_jiff,
(unsigned long)t->tv_set.tv_usec,
(unsigned long)t->tv_expires.tv_jiff,
(unsigned long)t->tv_expires.tv_usec,
t->delay_us,
t->data);
if (seq_has_overflowed(m))
return 0;
local_irq_save(flags);
if (t->next != nextt)


@ -77,36 +77,38 @@ int show_cpuinfo(struct seq_file *m, void *v)
}
}
return seq_printf(m,
"processor\t: %d\n"
"cpu\t\t: CRIS\n"
"cpu revision\t: %lu\n"
"cpu model\t: %s\n"
"cache size\t: %d KB\n"
"fpu\t\t: %s\n"
"mmu\t\t: %s\n"
"mmu DMA bug\t: %s\n"
"ethernet\t: %s Mbps\n"
"token ring\t: %s\n"
"scsi\t\t: %s\n"
"ata\t\t: %s\n"
"usb\t\t: %s\n"
"bogomips\t: %lu.%02lu\n\n",
seq_printf(m,
"processor\t: %d\n"
"cpu\t\t: CRIS\n"
"cpu revision\t: %lu\n"
"cpu model\t: %s\n"
"cache size\t: %d KB\n"
"fpu\t\t: %s\n"
"mmu\t\t: %s\n"
"mmu DMA bug\t: %s\n"
"ethernet\t: %s Mbps\n"
"token ring\t: %s\n"
"scsi\t\t: %s\n"
"ata\t\t: %s\n"
"usb\t\t: %s\n"
"bogomips\t: %lu.%02lu\n\n",
cpu,
revision,
info->cpu_model,
info->cache_size,
info->flags & HAS_FPU ? "yes" : "no",
info->flags & HAS_MMU ? "yes" : "no",
info->flags & HAS_MMU_BUG ? "yes" : "no",
info->flags & HAS_ETHERNET100 ? "10/100" : "10",
info->flags & HAS_TOKENRING ? "4/16 Mbps" : "no",
info->flags & HAS_SCSI ? "yes" : "no",
info->flags & HAS_ATA ? "yes" : "no",
info->flags & HAS_USB ? "yes" : "no",
(loops_per_jiffy * HZ + 500) / 500000,
((loops_per_jiffy * HZ + 500) / 5000) % 100);
cpu,
revision,
info->cpu_model,
info->cache_size,
info->flags & HAS_FPU ? "yes" : "no",
info->flags & HAS_MMU ? "yes" : "no",
info->flags & HAS_MMU_BUG ? "yes" : "no",
info->flags & HAS_ETHERNET100 ? "10/100" : "10",
info->flags & HAS_TOKENRING ? "4/16 Mbps" : "no",
info->flags & HAS_SCSI ? "yes" : "no",
info->flags & HAS_ATA ? "yes" : "no",
info->flags & HAS_USB ? "yes" : "no",
(loops_per_jiffy * HZ + 500) / 500000,
((loops_per_jiffy * HZ + 500) / 5000) % 100);
return 0;
}
#endif /* CONFIG_PROC_FS */


@ -27,7 +27,6 @@
static int show_cpuinfo(struct seq_file *m, void *v)
{
int count = 0;
char *fpga_family = "Unknown";
char *cpu_ver = "Unknown";
int i;
@ -48,91 +47,89 @@ static int show_cpuinfo(struct seq_file *m, void *v)
}
}
count = seq_printf(m,
"CPU-Family: MicroBlaze\n"
"FPGA-Arch: %s\n"
"CPU-Ver: %s, %s endian\n"
"CPU-MHz: %d.%02d\n"
"BogoMips: %lu.%02lu\n",
fpga_family,
cpu_ver,
cpuinfo.endian ? "little" : "big",
cpuinfo.cpu_clock_freq /
1000000,
cpuinfo.cpu_clock_freq %
1000000,
loops_per_jiffy / (500000 / HZ),
(loops_per_jiffy / (5000 / HZ)) % 100);
seq_printf(m,
"CPU-Family: MicroBlaze\n"
"FPGA-Arch: %s\n"
"CPU-Ver: %s, %s endian\n"
"CPU-MHz: %d.%02d\n"
"BogoMips: %lu.%02lu\n",
fpga_family,
cpu_ver,
cpuinfo.endian ? "little" : "big",
cpuinfo.cpu_clock_freq / 1000000,
cpuinfo.cpu_clock_freq % 1000000,
loops_per_jiffy / (500000 / HZ),
(loops_per_jiffy / (5000 / HZ)) % 100);
count += seq_printf(m,
"HW:\n Shift:\t\t%s\n"
" MSR:\t\t%s\n"
" PCMP:\t\t%s\n"
" DIV:\t\t%s\n",
(cpuinfo.use_instr & PVR0_USE_BARREL_MASK) ? "yes" : "no",
(cpuinfo.use_instr & PVR2_USE_MSR_INSTR) ? "yes" : "no",
(cpuinfo.use_instr & PVR2_USE_PCMP_INSTR) ? "yes" : "no",
(cpuinfo.use_instr & PVR0_USE_DIV_MASK) ? "yes" : "no");
seq_printf(m,
"HW:\n Shift:\t\t%s\n"
" MSR:\t\t%s\n"
" PCMP:\t\t%s\n"
" DIV:\t\t%s\n",
(cpuinfo.use_instr & PVR0_USE_BARREL_MASK) ? "yes" : "no",
(cpuinfo.use_instr & PVR2_USE_MSR_INSTR) ? "yes" : "no",
(cpuinfo.use_instr & PVR2_USE_PCMP_INSTR) ? "yes" : "no",
(cpuinfo.use_instr & PVR0_USE_DIV_MASK) ? "yes" : "no");
count += seq_printf(m,
" MMU:\t\t%x\n",
cpuinfo.mmu);
seq_printf(m, " MMU:\t\t%x\n", cpuinfo.mmu);
count += seq_printf(m,
" MUL:\t\t%s\n"
" FPU:\t\t%s\n",
(cpuinfo.use_mult & PVR2_USE_MUL64_MASK) ? "v2" :
(cpuinfo.use_mult & PVR0_USE_HW_MUL_MASK) ? "v1" : "no",
(cpuinfo.use_fpu & PVR2_USE_FPU2_MASK) ? "v2" :
(cpuinfo.use_fpu & PVR0_USE_FPU_MASK) ? "v1" : "no");
seq_printf(m,
" MUL:\t\t%s\n"
" FPU:\t\t%s\n",
(cpuinfo.use_mult & PVR2_USE_MUL64_MASK) ? "v2" :
(cpuinfo.use_mult & PVR0_USE_HW_MUL_MASK) ? "v1" : "no",
(cpuinfo.use_fpu & PVR2_USE_FPU2_MASK) ? "v2" :
(cpuinfo.use_fpu & PVR0_USE_FPU_MASK) ? "v1" : "no");
count += seq_printf(m,
" Exc:\t\t%s%s%s%s%s%s%s%s\n",
(cpuinfo.use_exc & PVR2_OPCODE_0x0_ILL_MASK) ? "op0x0 " : "",
(cpuinfo.use_exc & PVR2_UNALIGNED_EXC_MASK) ? "unal " : "",
(cpuinfo.use_exc & PVR2_ILL_OPCODE_EXC_MASK) ? "ill " : "",
(cpuinfo.use_exc & PVR2_IOPB_BUS_EXC_MASK) ? "iopb " : "",
(cpuinfo.use_exc & PVR2_DOPB_BUS_EXC_MASK) ? "dopb " : "",
(cpuinfo.use_exc & PVR2_DIV_ZERO_EXC_MASK) ? "zero " : "",
(cpuinfo.use_exc & PVR2_FPU_EXC_MASK) ? "fpu " : "",
(cpuinfo.use_exc & PVR2_USE_FSL_EXC) ? "fsl " : "");
seq_printf(m,
" Exc:\t\t%s%s%s%s%s%s%s%s\n",
(cpuinfo.use_exc & PVR2_OPCODE_0x0_ILL_MASK) ? "op0x0 " : "",
(cpuinfo.use_exc & PVR2_UNALIGNED_EXC_MASK) ? "unal " : "",
(cpuinfo.use_exc & PVR2_ILL_OPCODE_EXC_MASK) ? "ill " : "",
(cpuinfo.use_exc & PVR2_IOPB_BUS_EXC_MASK) ? "iopb " : "",
(cpuinfo.use_exc & PVR2_DOPB_BUS_EXC_MASK) ? "dopb " : "",
(cpuinfo.use_exc & PVR2_DIV_ZERO_EXC_MASK) ? "zero " : "",
(cpuinfo.use_exc & PVR2_FPU_EXC_MASK) ? "fpu " : "",
(cpuinfo.use_exc & PVR2_USE_FSL_EXC) ? "fsl " : "");
count += seq_printf(m,
"Stream-insns:\t%sprivileged\n",
cpuinfo.mmu_privins ? "un" : "");
seq_printf(m,
"Stream-insns:\t%sprivileged\n",
cpuinfo.mmu_privins ? "un" : "");
if (cpuinfo.use_icache)
count += seq_printf(m,
"Icache:\t\t%ukB\tline length:\t%dB\n",
cpuinfo.icache_size >> 10,
cpuinfo.icache_line_length);
seq_printf(m,
"Icache:\t\t%ukB\tline length:\t%dB\n",
cpuinfo.icache_size >> 10,
cpuinfo.icache_line_length);
else
count += seq_printf(m, "Icache:\t\tno\n");
seq_puts(m, "Icache:\t\tno\n");
if (cpuinfo.use_dcache) {
count += seq_printf(m,
"Dcache:\t\t%ukB\tline length:\t%dB\n",
cpuinfo.dcache_size >> 10,
cpuinfo.dcache_line_length);
seq_printf(m, "Dcache-Policy:\t");
seq_printf(m,
"Dcache:\t\t%ukB\tline length:\t%dB\n",
cpuinfo.dcache_size >> 10,
cpuinfo.dcache_line_length);
seq_puts(m, "Dcache-Policy:\t");
if (cpuinfo.dcache_wb)
count += seq_printf(m, "write-back\n");
seq_puts(m, "write-back\n");
else
count += seq_printf(m, "write-through\n");
} else
count += seq_printf(m, "Dcache:\t\tno\n");
seq_puts(m, "write-through\n");
} else {
seq_puts(m, "Dcache:\t\tno\n");
}
count += seq_printf(m,
"HW-Debug:\t%s\n",
cpuinfo.hw_debug ? "yes" : "no");
seq_printf(m,
"HW-Debug:\t%s\n",
cpuinfo.hw_debug ? "yes" : "no");
count += seq_printf(m,
"PVR-USR1:\t%02x\n"
"PVR-USR2:\t%08x\n",
cpuinfo.pvr_user1,
cpuinfo.pvr_user2);
seq_printf(m,
"PVR-USR1:\t%02x\n"
"PVR-USR2:\t%08x\n",
cpuinfo.pvr_user1,
cpuinfo.pvr_user2);
seq_printf(m, "Page size:\t%lu\n", PAGE_SIZE);
count += seq_printf(m, "Page size:\t%lu\n", PAGE_SIZE);
return 0;
}


@ -126,47 +126,46 @@ void __init setup_cpuinfo(void)
*/
static int show_cpuinfo(struct seq_file *m, void *v)
{
int count = 0;
const u32 clockfreq = cpuinfo.cpu_clock_freq;
count = seq_printf(m,
"CPU:\t\tNios II/%s\n"
"MMU:\t\t%s\n"
"FPU:\t\tnone\n"
"Clocking:\t%u.%02u MHz\n"
"BogoMips:\t%lu.%02lu\n"
"Calibration:\t%lu loops\n",
cpuinfo.cpu_impl,
cpuinfo.mmu ? "present" : "none",
clockfreq / 1000000, (clockfreq / 100000) % 10,
(loops_per_jiffy * HZ) / 500000,
((loops_per_jiffy * HZ) / 5000) % 100,
(loops_per_jiffy * HZ));
seq_printf(m,
"CPU:\t\tNios II/%s\n"
"MMU:\t\t%s\n"
"FPU:\t\tnone\n"
"Clocking:\t%u.%02u MHz\n"
"BogoMips:\t%lu.%02lu\n"
"Calibration:\t%lu loops\n",
cpuinfo.cpu_impl,
cpuinfo.mmu ? "present" : "none",
clockfreq / 1000000, (clockfreq / 100000) % 10,
(loops_per_jiffy * HZ) / 500000,
((loops_per_jiffy * HZ) / 5000) % 100,
(loops_per_jiffy * HZ));
count += seq_printf(m,
"HW:\n"
" MUL:\t\t%s\n"
" MULX:\t\t%s\n"
" DIV:\t\t%s\n",
cpuinfo.has_mul ? "yes" : "no",
cpuinfo.has_mulx ? "yes" : "no",
cpuinfo.has_div ? "yes" : "no");
seq_printf(m,
"HW:\n"
" MUL:\t\t%s\n"
" MULX:\t\t%s\n"
" DIV:\t\t%s\n",
cpuinfo.has_mul ? "yes" : "no",
cpuinfo.has_mulx ? "yes" : "no",
cpuinfo.has_div ? "yes" : "no");
count += seq_printf(m,
"Icache:\t\t%ukB, line length: %u\n",
cpuinfo.icache_size >> 10,
cpuinfo.icache_line_size);
seq_printf(m,
"Icache:\t\t%ukB, line length: %u\n",
cpuinfo.icache_size >> 10,
cpuinfo.icache_line_size);
count += seq_printf(m,
"Dcache:\t\t%ukB, line length: %u\n",
cpuinfo.dcache_size >> 10,
cpuinfo.dcache_line_size);
seq_printf(m,
"Dcache:\t\t%ukB, line length: %u\n",
cpuinfo.dcache_size >> 10,
cpuinfo.dcache_line_size);
count += seq_printf(m,
"TLB:\t\t%u ways, %u entries, %u PID bits\n",
cpuinfo.tlb_num_ways,
cpuinfo.tlb_num_entries,
cpuinfo.tlb_pid_num_bits);
seq_printf(m,
"TLB:\t\t%u ways, %u entries, %u PID bits\n",
cpuinfo.tlb_num_ways,
cpuinfo.tlb_num_entries,
cpuinfo.tlb_pid_num_bits);
return 0;
}


@ -329,30 +329,32 @@ static int show_cpuinfo(struct seq_file *m, void *v)
version = (vr & SPR_VR_VER) >> 24;
revision = vr & SPR_VR_REV;
return seq_printf(m,
"cpu\t\t: OpenRISC-%x\n"
"revision\t: %d\n"
"frequency\t: %ld\n"
"dcache size\t: %d bytes\n"
"dcache block size\t: %d bytes\n"
"icache size\t: %d bytes\n"
"icache block size\t: %d bytes\n"
"immu\t\t: %d entries, %lu ways\n"
"dmmu\t\t: %d entries, %lu ways\n"
"bogomips\t: %lu.%02lu\n",
version,
revision,
loops_per_jiffy * HZ,
cpuinfo.dcache_size,
cpuinfo.dcache_block_size,
cpuinfo.icache_size,
cpuinfo.icache_block_size,
1 << ((mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTS) >> 2),
1 + (mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTW),
1 << ((mfspr(SPR_IMMUCFGR) & SPR_IMMUCFGR_NTS) >> 2),
1 + (mfspr(SPR_IMMUCFGR) & SPR_IMMUCFGR_NTW),
(loops_per_jiffy * HZ) / 500000,
((loops_per_jiffy * HZ) / 5000) % 100);
seq_printf(m,
"cpu\t\t: OpenRISC-%x\n"
"revision\t: %d\n"
"frequency\t: %ld\n"
"dcache size\t: %d bytes\n"
"dcache block size\t: %d bytes\n"
"icache size\t: %d bytes\n"
"icache block size\t: %d bytes\n"
"immu\t\t: %d entries, %lu ways\n"
"dmmu\t\t: %d entries, %lu ways\n"
"bogomips\t: %lu.%02lu\n",
version,
revision,
loops_per_jiffy * HZ,
cpuinfo.dcache_size,
cpuinfo.dcache_block_size,
cpuinfo.icache_size,
cpuinfo.icache_block_size,
1 << ((mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTS) >> 2),
1 + (mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTW),
1 << ((mfspr(SPR_IMMUCFGR) & SPR_IMMUCFGR_NTS) >> 2),
1 + (mfspr(SPR_IMMUCFGR) & SPR_IMMUCFGR_NTW),
(loops_per_jiffy * HZ) / 500000,
((loops_per_jiffy * HZ) / 5000) % 100);
return 0;
}
static void *c_start(struct seq_file *m, loff_t * pos)


@ -29,8 +29,9 @@ static int opal_power_control_event(struct notifier_block *nb,
switch (type) {
case SOFT_REBOOT:
/* Fall through. The service processor is responsible for
* bringing the machine back up */
pr_info("OPAL: reboot requested\n");
orderly_reboot();
break;
case SOFT_OFF:
pr_info("OPAL: poweroff requested\n");
orderly_poweroff(true);


@ -328,6 +328,7 @@ config COMPAT
select COMPAT_BINFMT_ELF if BINFMT_ELF
select ARCH_WANT_OLD_COMPAT_IPC
select COMPAT_OLD_SIGACTION
depends on MULTIUSER
help
Select this option if you want to enable your system kernel to
handle system-calls from ELF binaries for 31 bit ESA. This option


@ -45,8 +45,10 @@ static int pci_perf_show(struct seq_file *m, void *v)
if (!zdev)
return 0;
if (!zdev->fmb)
return seq_printf(m, "FMB statistics disabled\n");
if (!zdev->fmb) {
seq_puts(m, "FMB statistics disabled\n");
return 0;
}
/* header */
seq_printf(m, "FMB @ %p\n", zdev->fmb);


@ -404,11 +404,10 @@ static const struct file_operations mtrr_fops = {
static int mtrr_seq_show(struct seq_file *seq, void *offset)
{
char factor;
int i, max, len;
int i, max;
mtrr_type type;
unsigned long base, size;
len = 0;
max = num_var_ranges;
for (i = 0; i < max; i++) {
mtrr_if->get(i, &base, &size, &type);
@ -425,11 +424,10 @@ static int mtrr_seq_show(struct seq_file *seq, void *offset)
size >>= 20 - PAGE_SHIFT;
}
/* Base can be > 32bit */
len += seq_printf(seq, "reg%02i: base=0x%06lx000 "
"(%5luMB), size=%5lu%cB, count=%d: %s\n",
i, base, base >> (20 - PAGE_SHIFT), size,
factor, mtrr_usage_table[i],
mtrr_attrib_to_str(type));
seq_printf(seq, "reg%02i: base=0x%06lx000 (%5luMB), size=%5lu%cB, count=%d: %s\n",
i, base, base >> (20 - PAGE_SHIFT),
size, factor,
mtrr_usage_table[i], mtrr_attrib_to_str(type));
}
return 0;
}


@ -843,7 +843,6 @@ static int print_wakeup_source_stats(struct seq_file *m,
unsigned long active_count;
ktime_t active_time;
ktime_t prevent_sleep_time;
int ret;
spin_lock_irqsave(&ws->lock, flags);
@ -866,17 +865,16 @@ static int print_wakeup_source_stats(struct seq_file *m,
active_time = ktime_set(0, 0);
}
ret = seq_printf(m, "%-12s\t%lu\t\t%lu\t\t%lu\t\t%lu\t\t"
"%lld\t\t%lld\t\t%lld\t\t%lld\t\t%lld\n",
ws->name, active_count, ws->event_count,
ws->wakeup_count, ws->expire_count,
ktime_to_ms(active_time), ktime_to_ms(total_time),
ktime_to_ms(max_time), ktime_to_ms(ws->last_time),
ktime_to_ms(prevent_sleep_time));
seq_printf(m, "%-12s\t%lu\t\t%lu\t\t%lu\t\t%lu\t\t%lld\t\t%lld\t\t%lld\t\t%lld\t\t%lld\n",
ws->name, active_count, ws->event_count,
ws->wakeup_count, ws->expire_count,
ktime_to_ms(active_time), ktime_to_ms(total_time),
ktime_to_ms(max_time), ktime_to_ms(ws->last_time),
ktime_to_ms(prevent_sleep_time));
spin_unlock_irqrestore(&ws->lock, flags);
return ret;
return 0;
}
/**

View file

@ -137,7 +137,7 @@
*/
static bool verbose = 0;
static int verbose;
static int major = PG_MAJOR;
static char *name = PG_NAME;
static int disable = 0;
@ -168,7 +168,7 @@ enum {D_PRT, D_PRO, D_UNI, D_MOD, D_SLV, D_DLY};
#include <asm/uaccess.h>
module_param(verbose, bool, 0644);
module_param(verbose, int, 0644);
module_param(major, int, 0);
module_param(name, charp, 0);
module_param_array(drive0, int, NULL, 0);

View file

@ -43,11 +43,22 @@ static const char *default_compressor = "lzo";
/* Module params (documentation at end) */
static unsigned int num_devices = 1;
static inline void deprecated_attr_warn(const char *name)
{
pr_warn_once("%d (%s) Attribute %s (and others) will be removed. %s\n",
task_pid_nr(current),
current->comm,
name,
"See zram documentation.");
}
#define ZRAM_ATTR_RO(name) \
static ssize_t name##_show(struct device *d, \
struct device_attribute *attr, char *b) \
{ \
struct zram *zram = dev_to_zram(d); \
\
deprecated_attr_warn(__stringify(name)); \
return scnprintf(b, PAGE_SIZE, "%llu\n", \
(u64)atomic64_read(&zram->stats.name)); \
} \
@ -89,6 +100,7 @@ static ssize_t orig_data_size_show(struct device *dev,
{
struct zram *zram = dev_to_zram(dev);
deprecated_attr_warn("orig_data_size");
return scnprintf(buf, PAGE_SIZE, "%llu\n",
(u64)(atomic64_read(&zram->stats.pages_stored)) << PAGE_SHIFT);
}
@ -99,6 +111,7 @@ static ssize_t mem_used_total_show(struct device *dev,
u64 val = 0;
struct zram *zram = dev_to_zram(dev);
deprecated_attr_warn("mem_used_total");
down_read(&zram->init_lock);
if (init_done(zram)) {
struct zram_meta *meta = zram->meta;
@ -128,6 +141,7 @@ static ssize_t mem_limit_show(struct device *dev,
u64 val;
struct zram *zram = dev_to_zram(dev);
deprecated_attr_warn("mem_limit");
down_read(&zram->init_lock);
val = zram->limit_pages;
up_read(&zram->init_lock);
@ -159,6 +173,7 @@ static ssize_t mem_used_max_show(struct device *dev,
u64 val = 0;
struct zram *zram = dev_to_zram(dev);
deprecated_attr_warn("mem_used_max");
down_read(&zram->init_lock);
if (init_done(zram))
val = atomic_long_read(&zram->stats.max_used_pages);
@ -670,8 +685,12 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
int offset, int rw)
{
unsigned long start_time = jiffies;
int ret;
generic_start_io_acct(rw, bvec->bv_len >> SECTOR_SHIFT,
&zram->disk->part0);
if (rw == READ) {
atomic64_inc(&zram->stats.num_reads);
ret = zram_bvec_read(zram, bvec, index, offset);
@ -680,6 +699,8 @@ static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
ret = zram_bvec_write(zram, bvec, index, offset);
}
generic_end_io_acct(rw, &zram->disk->part0, start_time);
if (unlikely(ret)) {
if (rw == READ)
atomic64_inc(&zram->stats.failed_reads);
@ -1027,6 +1048,55 @@ static DEVICE_ATTR_RW(mem_used_max);
static DEVICE_ATTR_RW(max_comp_streams);
static DEVICE_ATTR_RW(comp_algorithm);
static ssize_t io_stat_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct zram *zram = dev_to_zram(dev);
ssize_t ret;
down_read(&zram->init_lock);
ret = scnprintf(buf, PAGE_SIZE,
"%8llu %8llu %8llu %8llu\n",
(u64)atomic64_read(&zram->stats.failed_reads),
(u64)atomic64_read(&zram->stats.failed_writes),
(u64)atomic64_read(&zram->stats.invalid_io),
(u64)atomic64_read(&zram->stats.notify_free));
up_read(&zram->init_lock);
return ret;
}
static ssize_t mm_stat_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct zram *zram = dev_to_zram(dev);
u64 orig_size, mem_used = 0;
long max_used;
ssize_t ret;
down_read(&zram->init_lock);
if (init_done(zram))
mem_used = zs_get_total_pages(zram->meta->mem_pool);
orig_size = atomic64_read(&zram->stats.pages_stored);
max_used = atomic_long_read(&zram->stats.max_used_pages);
ret = scnprintf(buf, PAGE_SIZE,
"%8llu %8llu %8llu %8lu %8ld %8llu %8llu\n",
orig_size << PAGE_SHIFT,
(u64)atomic64_read(&zram->stats.compr_data_size),
mem_used << PAGE_SHIFT,
zram->limit_pages << PAGE_SHIFT,
max_used << PAGE_SHIFT,
(u64)atomic64_read(&zram->stats.zero_pages),
(u64)atomic64_read(&zram->stats.num_migrated));
up_read(&zram->init_lock);
return ret;
}
static DEVICE_ATTR_RO(io_stat);
static DEVICE_ATTR_RO(mm_stat);
ZRAM_ATTR_RO(num_reads);
ZRAM_ATTR_RO(num_writes);
ZRAM_ATTR_RO(failed_reads);
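
A minimal userspace sketch (not part of the patch; /sys/block/zram0 is only an example path) that parses the new mm_stat node in the same column order as the scnprintf() format used by mm_stat_show() above:

#include <stdio.h>

int main(void)
{
	unsigned long long orig, compr, used, limit, max_used, zero, migrated;
	FILE *f = fopen("/sys/block/zram0/mm_stat", "r");

	if (!f)
		return 1;
	if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu",
		   &orig, &compr, &used, &limit, &max_used,
		   &zero, &migrated) == 7)
		printf("stored %llu bytes, compressed to %llu, pool uses %llu\n",
		       orig, compr, used);
	fclose(f);
	return 0;
}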
@ -1054,6 +1124,8 @@ static struct attribute *zram_disk_attrs[] = {
&dev_attr_mem_used_max.attr,
&dev_attr_max_comp_streams.attr,
&dev_attr_comp_algorithm.attr,
&dev_attr_io_stat.attr,
&dev_attr_mm_stat.attr,
NULL,
};
@ -1082,6 +1154,7 @@ static int create_device(struct zram *zram, int device_id)
if (!zram->disk) {
pr_warn("Error allocating disk structure for device %d\n",
device_id);
ret = -ENOMEM;
goto out_free_queue;
}

View file

@ -84,6 +84,7 @@ struct zram_stats {
atomic64_t compr_data_size; /* compressed size of pages stored */
atomic64_t num_reads; /* failed + successful */
atomic64_t num_writes; /* --do-- */
atomic64_t num_migrated; /* no. of migrated objects */
atomic64_t failed_reads; /* can happen when memory is too low */
atomic64_t failed_writes; /* can happen when memory is too low */
atomic64_t invalid_io; /* non-page-aligned I/O requests */

View file

@ -1021,7 +1021,6 @@ static struct hppa_dma_ops ccio_ops = {
#ifdef CONFIG_PROC_FS
static int ccio_proc_info(struct seq_file *m, void *p)
{
int len = 0;
struct ioc *ioc = ioc_list;
while (ioc != NULL) {
@ -1031,22 +1030,22 @@ static int ccio_proc_info(struct seq_file *m, void *p)
int j;
#endif
len += seq_printf(m, "%s\n", ioc->name);
seq_printf(m, "%s\n", ioc->name);
len += seq_printf(m, "Cujo 2.0 bug : %s\n",
(ioc->cujo20_bug ? "yes" : "no"));
seq_printf(m, "Cujo 2.0 bug : %s\n",
(ioc->cujo20_bug ? "yes" : "no"));
len += seq_printf(m, "IO PDIR size : %d bytes (%d entries)\n",
total_pages * 8, total_pages);
seq_printf(m, "IO PDIR size : %d bytes (%d entries)\n",
total_pages * 8, total_pages);
#ifdef CCIO_COLLECT_STATS
len += seq_printf(m, "IO PDIR entries : %ld free %ld used (%d%%)\n",
total_pages - ioc->used_pages, ioc->used_pages,
(int)(ioc->used_pages * 100 / total_pages));
seq_printf(m, "IO PDIR entries : %ld free %ld used (%d%%)\n",
total_pages - ioc->used_pages, ioc->used_pages,
(int)(ioc->used_pages * 100 / total_pages));
#endif
len += seq_printf(m, "Resource bitmap : %d bytes (%d pages)\n",
ioc->res_size, total_pages);
seq_printf(m, "Resource bitmap : %d bytes (%d pages)\n",
ioc->res_size, total_pages);
#ifdef CCIO_COLLECT_STATS
min = max = ioc->avg_search[0];
@ -1058,26 +1057,26 @@ static int ccio_proc_info(struct seq_file *m, void *p)
min = ioc->avg_search[j];
}
avg /= CCIO_SEARCH_SAMPLE;
len += seq_printf(m, " Bitmap search : %ld/%ld/%ld (min/avg/max CPU Cycles)\n",
min, avg, max);
seq_printf(m, " Bitmap search : %ld/%ld/%ld (min/avg/max CPU Cycles)\n",
min, avg, max);
len += seq_printf(m, "pci_map_single(): %8ld calls %8ld pages (avg %d/1000)\n",
ioc->msingle_calls, ioc->msingle_pages,
(int)((ioc->msingle_pages * 1000)/ioc->msingle_calls));
seq_printf(m, "pci_map_single(): %8ld calls %8ld pages (avg %d/1000)\n",
ioc->msingle_calls, ioc->msingle_pages,
(int)((ioc->msingle_pages * 1000)/ioc->msingle_calls));
/* KLUGE - unmap_sg calls unmap_single for each mapped page */
min = ioc->usingle_calls - ioc->usg_calls;
max = ioc->usingle_pages - ioc->usg_pages;
len += seq_printf(m, "pci_unmap_single: %8ld calls %8ld pages (avg %d/1000)\n",
min, max, (int)((max * 1000)/min));
seq_printf(m, "pci_unmap_single: %8ld calls %8ld pages (avg %d/1000)\n",
min, max, (int)((max * 1000)/min));
len += seq_printf(m, "pci_map_sg() : %8ld calls %8ld pages (avg %d/1000)\n",
ioc->msg_calls, ioc->msg_pages,
(int)((ioc->msg_pages * 1000)/ioc->msg_calls));
seq_printf(m, "pci_map_sg() : %8ld calls %8ld pages (avg %d/1000)\n",
ioc->msg_calls, ioc->msg_pages,
(int)((ioc->msg_pages * 1000)/ioc->msg_calls));
len += seq_printf(m, "pci_unmap_sg() : %8ld calls %8ld pages (avg %d/1000)\n\n\n",
ioc->usg_calls, ioc->usg_pages,
(int)((ioc->usg_pages * 1000)/ioc->usg_calls));
seq_printf(m, "pci_unmap_sg() : %8ld calls %8ld pages (avg %d/1000)\n\n\n",
ioc->usg_calls, ioc->usg_pages,
(int)((ioc->usg_pages * 1000)/ioc->usg_calls));
#endif /* CCIO_COLLECT_STATS */
ioc = ioc->next;
@ -1101,7 +1100,6 @@ static const struct file_operations ccio_proc_info_fops = {
static int ccio_proc_bitmap_info(struct seq_file *m, void *p)
{
int len = 0;
struct ioc *ioc = ioc_list;
while (ioc != NULL) {
@ -1110,11 +1108,11 @@ static int ccio_proc_bitmap_info(struct seq_file *m, void *p)
for (j = 0; j < (ioc->res_size / sizeof(u32)); j++) {
if ((j & 7) == 0)
len += seq_puts(m, "\n ");
len += seq_printf(m, "%08x", *res_ptr);
seq_puts(m, "\n ");
seq_printf(m, "%08x", *res_ptr);
res_ptr++;
}
len += seq_puts(m, "\n\n");
seq_puts(m, "\n\n");
ioc = ioc->next;
break; /* XXX - remove me */
}

View file

@ -1774,37 +1774,35 @@ static int sba_proc_info(struct seq_file *m, void *p)
#ifdef SBA_COLLECT_STATS
unsigned long avg = 0, min, max;
#endif
int i, len = 0;
int i;
len += seq_printf(m, "%s rev %d.%d\n",
sba_dev->name,
(sba_dev->hw_rev & 0x7) + 1,
(sba_dev->hw_rev & 0x18) >> 3
);
len += seq_printf(m, "IO PDIR size : %d bytes (%d entries)\n",
(int) ((ioc->res_size << 3) * sizeof(u64)), /* 8 bits/byte */
total_pages);
seq_printf(m, "%s rev %d.%d\n",
sba_dev->name,
(sba_dev->hw_rev & 0x7) + 1,
(sba_dev->hw_rev & 0x18) >> 3);
seq_printf(m, "IO PDIR size : %d bytes (%d entries)\n",
(int)((ioc->res_size << 3) * sizeof(u64)), /* 8 bits/byte */
total_pages);
len += seq_printf(m, "Resource bitmap : %d bytes (%d pages)\n",
ioc->res_size, ioc->res_size << 3); /* 8 bits per byte */
seq_printf(m, "Resource bitmap : %d bytes (%d pages)\n",
ioc->res_size, ioc->res_size << 3); /* 8 bits per byte */
len += seq_printf(m, "LMMIO_BASE/MASK/ROUTE %08x %08x %08x\n",
READ_REG32(sba_dev->sba_hpa + LMMIO_DIST_BASE),
READ_REG32(sba_dev->sba_hpa + LMMIO_DIST_MASK),
READ_REG32(sba_dev->sba_hpa + LMMIO_DIST_ROUTE)
);
seq_printf(m, "LMMIO_BASE/MASK/ROUTE %08x %08x %08x\n",
READ_REG32(sba_dev->sba_hpa + LMMIO_DIST_BASE),
READ_REG32(sba_dev->sba_hpa + LMMIO_DIST_MASK),
READ_REG32(sba_dev->sba_hpa + LMMIO_DIST_ROUTE));
for (i=0; i<4; i++)
len += seq_printf(m, "DIR%d_BASE/MASK/ROUTE %08x %08x %08x\n", i,
READ_REG32(sba_dev->sba_hpa + LMMIO_DIRECT0_BASE + i*0x18),
READ_REG32(sba_dev->sba_hpa + LMMIO_DIRECT0_MASK + i*0x18),
READ_REG32(sba_dev->sba_hpa + LMMIO_DIRECT0_ROUTE + i*0x18)
);
seq_printf(m, "DIR%d_BASE/MASK/ROUTE %08x %08x %08x\n",
i,
READ_REG32(sba_dev->sba_hpa + LMMIO_DIRECT0_BASE + i*0x18),
READ_REG32(sba_dev->sba_hpa + LMMIO_DIRECT0_MASK + i*0x18),
READ_REG32(sba_dev->sba_hpa + LMMIO_DIRECT0_ROUTE + i*0x18));
#ifdef SBA_COLLECT_STATS
len += seq_printf(m, "IO PDIR entries : %ld free %ld used (%d%%)\n",
total_pages - ioc->used_pages, ioc->used_pages,
(int) (ioc->used_pages * 100 / total_pages));
seq_printf(m, "IO PDIR entries : %ld free %ld used (%d%%)\n",
total_pages - ioc->used_pages, ioc->used_pages,
(int)(ioc->used_pages * 100 / total_pages));
min = max = ioc->avg_search[0];
for (i = 0; i < SBA_SEARCH_SAMPLE; i++) {
@ -1813,26 +1811,26 @@ static int sba_proc_info(struct seq_file *m, void *p)
if (ioc->avg_search[i] < min) min = ioc->avg_search[i];
}
avg /= SBA_SEARCH_SAMPLE;
len += seq_printf(m, " Bitmap search : %ld/%ld/%ld (min/avg/max CPU Cycles)\n",
min, avg, max);
seq_printf(m, " Bitmap search : %ld/%ld/%ld (min/avg/max CPU Cycles)\n",
min, avg, max);
len += seq_printf(m, "pci_map_single(): %12ld calls %12ld pages (avg %d/1000)\n",
ioc->msingle_calls, ioc->msingle_pages,
(int) ((ioc->msingle_pages * 1000)/ioc->msingle_calls));
seq_printf(m, "pci_map_single(): %12ld calls %12ld pages (avg %d/1000)\n",
ioc->msingle_calls, ioc->msingle_pages,
(int)((ioc->msingle_pages * 1000)/ioc->msingle_calls));
/* KLUGE - unmap_sg calls unmap_single for each mapped page */
min = ioc->usingle_calls;
max = ioc->usingle_pages - ioc->usg_pages;
len += seq_printf(m, "pci_unmap_single: %12ld calls %12ld pages (avg %d/1000)\n",
min, max, (int) ((max * 1000)/min));
seq_printf(m, "pci_unmap_single: %12ld calls %12ld pages (avg %d/1000)\n",
min, max, (int)((max * 1000)/min));
len += seq_printf(m, "pci_map_sg() : %12ld calls %12ld pages (avg %d/1000)\n",
ioc->msg_calls, ioc->msg_pages,
(int) ((ioc->msg_pages * 1000)/ioc->msg_calls));
seq_printf(m, "pci_map_sg() : %12ld calls %12ld pages (avg %d/1000)\n",
ioc->msg_calls, ioc->msg_pages,
(int)((ioc->msg_pages * 1000)/ioc->msg_calls));
len += seq_printf(m, "pci_unmap_sg() : %12ld calls %12ld pages (avg %d/1000)\n",
ioc->usg_calls, ioc->usg_pages,
(int) ((ioc->usg_pages * 1000)/ioc->usg_calls));
seq_printf(m, "pci_unmap_sg() : %12ld calls %12ld pages (avg %d/1000)\n",
ioc->usg_calls, ioc->usg_pages,
(int)((ioc->usg_pages * 1000)/ioc->usg_calls));
#endif
return 0;
@ -1858,14 +1856,14 @@ sba_proc_bitmap_info(struct seq_file *m, void *p)
struct sba_device *sba_dev = sba_list;
struct ioc *ioc = &sba_dev->ioc[0]; /* FIXME: Multi-IOC support! */
unsigned int *res_ptr = (unsigned int *)ioc->res_map;
int i, len = 0;
int i;
for (i = 0; i < (ioc->res_size/sizeof(unsigned int)); ++i, ++res_ptr) {
if ((i & 7) == 0)
len += seq_printf(m, "\n ");
len += seq_printf(m, " %08x", *res_ptr);
seq_puts(m, "\n ");
seq_printf(m, " %08x", *res_ptr);
}
len += seq_printf(m, "\n");
seq_putc(m, '\n');
return 0;
}

View file

@ -459,23 +459,25 @@ static int cmos_procfs(struct device *dev, struct seq_file *seq)
/* NOTE: at least ICH6 reports battery status using a different
* (non-RTC) bit; and SQWE is ignored on many current systems.
*/
return seq_printf(seq,
"periodic_IRQ\t: %s\n"
"update_IRQ\t: %s\n"
"HPET_emulated\t: %s\n"
// "square_wave\t: %s\n"
"BCD\t\t: %s\n"
"DST_enable\t: %s\n"
"periodic_freq\t: %d\n"
"batt_status\t: %s\n",
(rtc_control & RTC_PIE) ? "yes" : "no",
(rtc_control & RTC_UIE) ? "yes" : "no",
is_hpet_enabled() ? "yes" : "no",
// (rtc_control & RTC_SQWE) ? "yes" : "no",
(rtc_control & RTC_DM_BINARY) ? "no" : "yes",
(rtc_control & RTC_DST_EN) ? "yes" : "no",
cmos->rtc->irq_freq,
(valid & RTC_VRT) ? "okay" : "dead");
seq_printf(seq,
"periodic_IRQ\t: %s\n"
"update_IRQ\t: %s\n"
"HPET_emulated\t: %s\n"
// "square_wave\t: %s\n"
"BCD\t\t: %s\n"
"DST_enable\t: %s\n"
"periodic_freq\t: %d\n"
"batt_status\t: %s\n",
(rtc_control & RTC_PIE) ? "yes" : "no",
(rtc_control & RTC_UIE) ? "yes" : "no",
is_hpet_enabled() ? "yes" : "no",
// (rtc_control & RTC_SQWE) ? "yes" : "no",
(rtc_control & RTC_DM_BINARY) ? "no" : "yes",
(rtc_control & RTC_DST_EN) ? "yes" : "no",
cmos->rtc->irq_freq,
(valid & RTC_VRT) ? "okay" : "dead");
return 0;
}
#else

View file

@ -434,9 +434,9 @@ static int ds1305_proc(struct device *dev, struct seq_file *seq)
}
done:
return seq_printf(seq,
"trickle_charge\t: %s%s\n",
diodes, resistors);
seq_printf(seq, "trickle_charge\t: %s%s\n", diodes, resistors);
return 0;
}
#else

View file

@ -277,13 +277,15 @@ static int mrst_procfs(struct device *dev, struct seq_file *seq)
valid = vrtc_cmos_read(RTC_VALID);
spin_unlock_irq(&rtc_lock);
return seq_printf(seq,
"periodic_IRQ\t: %s\n"
"alarm\t\t: %s\n"
"BCD\t\t: no\n"
"periodic_freq\t: daily (not adjustable)\n",
(rtc_control & RTC_PIE) ? "on" : "off",
(rtc_control & RTC_AIE) ? "on" : "off");
seq_printf(seq,
"periodic_IRQ\t: %s\n"
"alarm\t\t: %s\n"
"BCD\t\t: no\n"
"periodic_freq\t: daily (not adjustable)\n",
(rtc_control & RTC_PIE) ? "on" : "off",
(rtc_control & RTC_AIE) ? "on" : "off");
return 0;
}
#else

View file

@ -261,7 +261,9 @@ static int tegra_rtc_proc(struct device *dev, struct seq_file *seq)
if (!dev || !dev->driver)
return 0;
return seq_printf(seq, "name\t\t: %s\n", dev_name(dev));
seq_printf(seq, "name\t\t: %s\n", dev_name(dev));
return 0;
}
static irqreturn_t tegra_rtc_irq_handler(int irq, void *data)

View file

@ -330,18 +330,20 @@ cio_ignore_proc_seq_show(struct seq_file *s, void *it)
if (!iter->in_range) {
/* First device in range. */
if ((iter->devno == __MAX_SUBCHANNEL) ||
!is_blacklisted(iter->ssid, iter->devno + 1))
!is_blacklisted(iter->ssid, iter->devno + 1)) {
/* Singular device. */
return seq_printf(s, "0.%x.%04x\n",
iter->ssid, iter->devno);
seq_printf(s, "0.%x.%04x\n", iter->ssid, iter->devno);
return 0;
}
iter->in_range = 1;
return seq_printf(s, "0.%x.%04x-", iter->ssid, iter->devno);
seq_printf(s, "0.%x.%04x-", iter->ssid, iter->devno);
return 0;
}
if ((iter->devno == __MAX_SUBCHANNEL) ||
!is_blacklisted(iter->ssid, iter->devno + 1)) {
/* Last device in range. */
iter->in_range = 0;
return seq_printf(s, "0.%x.%04x\n", iter->ssid, iter->devno);
seq_printf(s, "0.%x.%04x\n", iter->ssid, iter->devno);
}
return 0;
}

View file

@ -160,8 +160,7 @@ static void do_envctrl_shutdown(struct bbc_cpu_temperature *tp)
printk(KERN_CRIT "kenvctrld: Shutting down the system now.\n");
shutting_down = 1;
if (orderly_poweroff(true) < 0)
printk(KERN_CRIT "envctrl: shutdown execution failed\n");
orderly_poweroff(true);
}
#define WARN_INTERVAL (30 * HZ)

View file

@ -970,18 +970,13 @@ static struct i2c_child_t *envctrl_get_i2c_child(unsigned char mon_type)
static void envctrl_do_shutdown(void)
{
static int inprog = 0;
int ret;
if (inprog != 0)
return;
inprog = 1;
printk(KERN_CRIT "kenvctrld: WARNING: Shutting down the system now.\n");
ret = orderly_poweroff(true);
if (ret < 0) {
printk(KERN_CRIT "kenvctrld: WARNING: system shutdown failed!\n");
inprog = 0; /* unlikely to succeed, but we could try again */
}
orderly_poweroff(true);
}
static struct task_struct *kenvctrld_task;

View file

@ -10,6 +10,7 @@ config LUSTRE_FS
select CRYPTO_SHA1
select CRYPTO_SHA256
select CRYPTO_SHA512
depends on MULTIUSER
help
This option enables Lustre file system client support. Choose Y
here if you want to access a Lustre file system cluster. To compile

View file

@ -463,6 +463,23 @@ int dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
}
EXPORT_SYMBOL_GPL(dax_fault);
/**
* dax_pfn_mkwrite - handle first write to DAX page
* @vma: The virtual memory area where the fault occurred
* @vmf: The description of the fault
*
*/
int dax_pfn_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
{
struct super_block *sb = file_inode(vma->vm_file)->i_sb;
sb_start_pagefault(sb);
file_update_time(vma->vm_file);
sb_end_pagefault(sb);
return VM_FAULT_NOPAGE;
}
EXPORT_SYMBOL_GPL(dax_pfn_mkwrite);
/**
* dax_zero_page_range - zero a range within a page of a DAX file
* @inode: The file being truncated

View file

@ -793,7 +793,6 @@ extern int ext2_fsync(struct file *file, loff_t start, loff_t end,
int datasync);
extern const struct inode_operations ext2_file_inode_operations;
extern const struct file_operations ext2_file_operations;
extern const struct file_operations ext2_dax_file_operations;
/* inode.c */
extern const struct address_space_operations ext2_aops;

View file

@ -39,6 +39,7 @@ static int ext2_dax_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
static const struct vm_operations_struct ext2_dax_vm_ops = {
.fault = ext2_dax_fault,
.page_mkwrite = ext2_dax_mkwrite,
.pfn_mkwrite = dax_pfn_mkwrite,
};
static int ext2_file_mmap(struct file *file, struct vm_area_struct *vma)
@ -106,22 +107,6 @@ const struct file_operations ext2_file_operations = {
.splice_write = iter_file_splice_write,
};
#ifdef CONFIG_FS_DAX
const struct file_operations ext2_dax_file_operations = {
.llseek = generic_file_llseek,
.read_iter = generic_file_read_iter,
.write_iter = generic_file_write_iter,
.unlocked_ioctl = ext2_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = ext2_compat_ioctl,
#endif
.mmap = ext2_file_mmap,
.open = dquot_file_open,
.release = ext2_release_file,
.fsync = ext2_fsync,
};
#endif
const struct inode_operations ext2_file_inode_operations = {
#ifdef CONFIG_EXT2_FS_XATTR
.setxattr = generic_setxattr,

View file

@ -1388,10 +1388,7 @@ struct inode *ext2_iget (struct super_block *sb, unsigned long ino)
if (S_ISREG(inode->i_mode)) {
inode->i_op = &ext2_file_inode_operations;
if (test_opt(inode->i_sb, DAX)) {
inode->i_mapping->a_ops = &ext2_aops;
inode->i_fop = &ext2_dax_file_operations;
} else if (test_opt(inode->i_sb, NOBH)) {
if (test_opt(inode->i_sb, NOBH)) {
inode->i_mapping->a_ops = &ext2_nobh_aops;
inode->i_fop = &ext2_file_operations;
} else {

View file

@ -104,10 +104,7 @@ static int ext2_create (struct inode * dir, struct dentry * dentry, umode_t mode
return PTR_ERR(inode);
inode->i_op = &ext2_file_inode_operations;
if (test_opt(inode->i_sb, DAX)) {
inode->i_mapping->a_ops = &ext2_aops;
inode->i_fop = &ext2_dax_file_operations;
} else if (test_opt(inode->i_sb, NOBH)) {
if (test_opt(inode->i_sb, NOBH)) {
inode->i_mapping->a_ops = &ext2_nobh_aops;
inode->i_fop = &ext2_file_operations;
} else {
@ -125,10 +122,7 @@ static int ext2_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
return PTR_ERR(inode);
inode->i_op = &ext2_file_inode_operations;
if (test_opt(inode->i_sb, DAX)) {
inode->i_mapping->a_ops = &ext2_aops;
inode->i_fop = &ext2_dax_file_operations;
} else if (test_opt(inode->i_sb, NOBH)) {
if (test_opt(inode->i_sb, NOBH)) {
inode->i_mapping->a_ops = &ext2_nobh_aops;
inode->i_fop = &ext2_file_operations;
} else {

View file

@ -2593,7 +2593,6 @@ extern const struct file_operations ext4_dir_operations;
/* file.c */
extern const struct inode_operations ext4_file_inode_operations;
extern const struct file_operations ext4_file_operations;
extern const struct file_operations ext4_dax_file_operations;
extern loff_t ext4_llseek(struct file *file, loff_t offset, int origin);
/* inline.c */

View file

@ -206,6 +206,7 @@ static int ext4_dax_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
static const struct vm_operations_struct ext4_dax_vm_ops = {
.fault = ext4_dax_fault,
.page_mkwrite = ext4_dax_mkwrite,
.pfn_mkwrite = dax_pfn_mkwrite,
};
#else
#define ext4_dax_vm_ops ext4_file_vm_ops
@ -622,24 +623,6 @@ const struct file_operations ext4_file_operations = {
.fallocate = ext4_fallocate,
};
#ifdef CONFIG_FS_DAX
const struct file_operations ext4_dax_file_operations = {
.llseek = ext4_llseek,
.read_iter = generic_file_read_iter,
.write_iter = ext4_file_write_iter,
.unlocked_ioctl = ext4_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = ext4_compat_ioctl,
#endif
.mmap = ext4_file_mmap,
.open = ext4_file_open,
.release = ext4_release_file,
.fsync = ext4_sync_file,
/* Splice not yet supported with DAX */
.fallocate = ext4_fallocate,
};
#endif
const struct inode_operations ext4_file_inode_operations = {
.setattr = ext4_setattr,
.getattr = ext4_getattr,

View file

@ -4090,10 +4090,7 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
if (S_ISREG(inode->i_mode)) {
inode->i_op = &ext4_file_inode_operations;
if (test_opt(inode->i_sb, DAX))
inode->i_fop = &ext4_dax_file_operations;
else
inode->i_fop = &ext4_file_operations;
inode->i_fop = &ext4_file_operations;
ext4_set_aops(inode);
} else if (S_ISDIR(inode->i_mode)) {
inode->i_op = &ext4_dir_inode_operations;

View file

@ -2235,10 +2235,7 @@ static int ext4_create(struct inode *dir, struct dentry *dentry, umode_t mode,
err = PTR_ERR(inode);
if (!IS_ERR(inode)) {
inode->i_op = &ext4_file_inode_operations;
if (test_opt(inode->i_sb, DAX))
inode->i_fop = &ext4_dax_file_operations;
else
inode->i_fop = &ext4_file_operations;
inode->i_fop = &ext4_file_operations;
ext4_set_aops(inode);
err = ext4_add_nondir(handle, dentry, inode);
if (!err && IS_DIRSYNC(dir))
@ -2302,10 +2299,7 @@ static int ext4_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
err = PTR_ERR(inode);
if (!IS_ERR(inode)) {
inode->i_op = &ext4_file_inode_operations;
if (test_opt(inode->i_sb, DAX))
inode->i_fop = &ext4_dax_file_operations;
else
inode->i_fop = &ext4_file_operations;
inode->i_fop = &ext4_file_operations;
ext4_set_aops(inode);
d_tmpfile(dentry, inode);
err = ext4_orphan_add(handle, inode);

View file

@ -48,9 +48,10 @@ struct hugetlbfs_config {
kuid_t uid;
kgid_t gid;
umode_t mode;
long nr_blocks;
long max_hpages;
long nr_inodes;
struct hstate *hstate;
long min_hpages;
};
struct hugetlbfs_inode_info {
@ -68,7 +69,7 @@ int sysctl_hugetlb_shm_group;
enum {
Opt_size, Opt_nr_inodes,
Opt_mode, Opt_uid, Opt_gid,
Opt_pagesize,
Opt_pagesize, Opt_min_size,
Opt_err,
};
@ -79,6 +80,7 @@ static const match_table_t tokens = {
{Opt_uid, "uid=%u"},
{Opt_gid, "gid=%u"},
{Opt_pagesize, "pagesize=%s"},
{Opt_min_size, "min_size=%s"},
{Opt_err, NULL},
};
@ -729,14 +731,38 @@ static const struct super_operations hugetlbfs_ops = {
.show_options = generic_show_options,
};
enum { NO_SIZE, SIZE_STD, SIZE_PERCENT };
/*
* Convert size option passed from command line to number of huge pages
* in the pool specified by hstate. Size option could be in bytes
* (val_type == SIZE_STD) or percentage of the pool (val_type == SIZE_PERCENT).
*/
static long long
hugetlbfs_size_to_hpages(struct hstate *h, unsigned long long size_opt,
int val_type)
{
if (val_type == NO_SIZE)
return -1;
if (val_type == SIZE_PERCENT) {
size_opt <<= huge_page_shift(h);
size_opt *= h->max_huge_pages;
do_div(size_opt, 100);
}
size_opt >>= huge_page_shift(h);
return size_opt;
}
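
For concreteness, a standalone sketch of the conversion performed by hugetlbfs_size_to_hpages() above; the 2 MB huge page size and 1000-page pool are assumed values, not taken from the patch. A mount option such as size=1G follows the SIZE_STD path, while size=50% follows the SIZE_PERCENT path (the new min_size= option is converted the same way):

#include <stdio.h>

int main(void)
{
	unsigned int huge_page_shift = 21;		/* assumed: 2 MB huge pages */
	unsigned long long max_huge_pages = 1000;	/* assumed: pool size */

	/* SIZE_STD: "size=1G" is rounded down to whole huge pages */
	unsigned long long std_opt = 1ULL << 30;
	printf("size=1G  -> %llu hpages\n", std_opt >> huge_page_shift);	/* 512 */

	/* SIZE_PERCENT: "size=50%" is a fraction of the configured pool */
	unsigned long long pct_opt = 50;
	pct_opt <<= huge_page_shift;
	pct_opt *= max_huge_pages;
	pct_opt /= 100;					/* do_div() in the kernel */
	printf("size=50%% -> %llu hpages\n", pct_opt >> huge_page_shift);	/* 500 */
	return 0;
}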
static int
hugetlbfs_parse_options(char *options, struct hugetlbfs_config *pconfig)
{
char *p, *rest;
substring_t args[MAX_OPT_ARGS];
int option;
unsigned long long size = 0;
enum { NO_SIZE, SIZE_STD, SIZE_PERCENT } setsize = NO_SIZE;
unsigned long long max_size_opt = 0, min_size_opt = 0;
int max_val_type = NO_SIZE, min_val_type = NO_SIZE;
if (!options)
return 0;
@ -774,10 +800,10 @@ hugetlbfs_parse_options(char *options, struct hugetlbfs_config *pconfig)
/* memparse() will accept a K/M/G without a digit */
if (!isdigit(*args[0].from))
goto bad_val;
size = memparse(args[0].from, &rest);
setsize = SIZE_STD;
max_size_opt = memparse(args[0].from, &rest);
max_val_type = SIZE_STD;
if (*rest == '%')
setsize = SIZE_PERCENT;
max_val_type = SIZE_PERCENT;
break;
}
@ -800,6 +826,17 @@ hugetlbfs_parse_options(char *options, struct hugetlbfs_config *pconfig)
break;
}
case Opt_min_size: {
/* memparse() will accept a K/M/G without a digit */
if (!isdigit(*args[0].from))
goto bad_val;
min_size_opt = memparse(args[0].from, &rest);
min_val_type = SIZE_STD;
if (*rest == '%')
min_val_type = SIZE_PERCENT;
break;
}
default:
pr_err("Bad mount option: \"%s\"\n", p);
return -EINVAL;
@ -807,15 +844,22 @@ hugetlbfs_parse_options(char *options, struct hugetlbfs_config *pconfig)
}
}
/* Do size after hstate is set up */
if (setsize > NO_SIZE) {
struct hstate *h = pconfig->hstate;
if (setsize == SIZE_PERCENT) {
size <<= huge_page_shift(h);
size *= h->max_huge_pages;
do_div(size, 100);
}
pconfig->nr_blocks = (size >> huge_page_shift(h));
/*
* Use huge page pool size (in hstate) to convert the size
* options to number of huge pages. If NO_SIZE, -1 is returned.
*/
pconfig->max_hpages = hugetlbfs_size_to_hpages(pconfig->hstate,
max_size_opt, max_val_type);
pconfig->min_hpages = hugetlbfs_size_to_hpages(pconfig->hstate,
min_size_opt, min_val_type);
/*
* If max_size was specified, then min_size must be smaller
*/
if (max_val_type > NO_SIZE &&
pconfig->min_hpages > pconfig->max_hpages) {
pr_err("minimum size can not be greater than maximum size\n");
return -EINVAL;
}
return 0;
@ -834,12 +878,13 @@ hugetlbfs_fill_super(struct super_block *sb, void *data, int silent)
save_mount_options(sb, data);
config.nr_blocks = -1; /* No limit on size by default */
config.max_hpages = -1; /* No limit on size by default */
config.nr_inodes = -1; /* No limit on number of inodes by default */
config.uid = current_fsuid();
config.gid = current_fsgid();
config.mode = 0755;
config.hstate = &default_hstate;
config.min_hpages = -1; /* No default minimum size */
ret = hugetlbfs_parse_options(data, &config);
if (ret)
return ret;
@ -853,8 +898,15 @@ hugetlbfs_fill_super(struct super_block *sb, void *data, int silent)
sbinfo->max_inodes = config.nr_inodes;
sbinfo->free_inodes = config.nr_inodes;
sbinfo->spool = NULL;
if (config.nr_blocks != -1) {
sbinfo->spool = hugepage_new_subpool(config.nr_blocks);
/*
* Allocate and initialize subpool if maximum or minimum size is
* specified. Any needed reservations (for minimum size) are taken
* when the subpool is created.
*/
if (config.max_hpages != -1 || config.min_hpages != -1) {
sbinfo->spool = hugepage_new_subpool(config.hstate,
config.max_hpages,
config.min_hpages);
if (!sbinfo->spool)
goto out_free;
}

View file

@ -183,30 +183,23 @@ static inline void remove_metapage(struct page *page, struct metapage *mp)
#endif
static void init_once(void *foo)
{
struct metapage *mp = (struct metapage *)foo;
mp->lid = 0;
mp->lsn = 0;
mp->flag = 0;
mp->data = NULL;
mp->clsn = 0;
mp->log = NULL;
set_bit(META_free, &mp->flag);
init_waitqueue_head(&mp->wait);
}
static inline struct metapage *alloc_metapage(gfp_t gfp_mask)
{
return mempool_alloc(metapage_mempool, gfp_mask);
struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);
if (mp) {
mp->lid = 0;
mp->lsn = 0;
mp->data = NULL;
mp->clsn = 0;
mp->log = NULL;
init_waitqueue_head(&mp->wait);
}
return mp;
}
static inline void free_metapage(struct metapage *mp)
{
mp->flag = 0;
set_bit(META_free, &mp->flag);
mempool_free(mp, metapage_mempool);
}
@ -216,7 +209,7 @@ int __init metapage_init(void)
* Allocate the metapage structures
*/
metapage_cache = kmem_cache_create("jfs_mp", sizeof(struct metapage),
0, 0, init_once);
0, 0, NULL);
if (metapage_cache == NULL)
return -ENOMEM;

View file

@ -48,7 +48,6 @@ struct metapage {
/* metapage flag */
#define META_locked 0
#define META_free 1
#define META_dirty 2
#define META_sync 3
#define META_discard 4

View file

@ -1,6 +1,6 @@
config NFS_FS
tristate "NFS client support"
depends on INET && FILE_LOCKING
depends on INET && FILE_LOCKING && MULTIUSER
select LOCKD
select SUNRPC
select NFS_ACL_SUPPORT if NFS_V3_ACL

View file

@ -6,6 +6,7 @@ config NFSD
select SUNRPC
select EXPORTFS
select NFS_ACL_SUPPORT if NFSD_V2_ACL
depends on MULTIUSER
help
Choose Y here if you want to allow other computers to access
files residing on this system using Sun's Network File System

View file

@ -99,8 +99,8 @@ static inline void task_name(struct seq_file *m, struct task_struct *p)
buf = m->buf + m->count;
/* Ignore error for now */
string_escape_str(tcomm, &buf, m->size - m->count,
ESCAPE_SPACE | ESCAPE_SPECIAL, "\n\\");
buf += string_escape_str(tcomm, buf, m->size - m->count,
ESCAPE_SPACE | ESCAPE_SPECIAL, "\n\\");
m->count = buf - m->buf;
seq_putc(m, '\n');
@ -188,6 +188,24 @@ static inline void task_state(struct seq_file *m, struct pid_namespace *ns,
from_kgid_munged(user_ns, GROUP_AT(group_info, g)));
put_cred(cred);
#ifdef CONFIG_PID_NS
seq_puts(m, "\nNStgid:");
for (g = ns->level; g <= pid->level; g++)
seq_printf(m, "\t%d",
task_tgid_nr_ns(p, pid->numbers[g].ns));
seq_puts(m, "\nNSpid:");
for (g = ns->level; g <= pid->level; g++)
seq_printf(m, "\t%d",
task_pid_nr_ns(p, pid->numbers[g].ns));
seq_puts(m, "\nNSpgid:");
for (g = ns->level; g <= pid->level; g++)
seq_printf(m, "\t%d",
task_pgrp_nr_ns(p, pid->numbers[g].ns));
seq_puts(m, "\nNSsid:");
for (g = ns->level; g <= pid->level; g++)
seq_printf(m, "\t%d",
task_session_nr_ns(p, pid->numbers[g].ns));
#endif
seq_putc(m, '\n');
}
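
A hypothetical userspace reader for the NStgid/NSpid/NSpgid/NSsid lines added to /proc/<pid>/status above; /proc/self is only an example target:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/self/status", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "NS", 2))	/* NStgid, NSpid, NSpgid, NSsid */
			fputs(line, stdout);
	fclose(f);
	return 0;
}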
@ -614,7 +632,9 @@ static int children_seq_show(struct seq_file *seq, void *v)
pid_t pid;
pid = pid_nr_ns(v, inode->i_sb->s_fs_info);
return seq_printf(seq, "%d ", pid);
seq_printf(seq, "%d ", pid);
return 0;
}
static void *children_seq_start(struct seq_file *seq, loff_t *pos)

View file

@ -238,13 +238,15 @@ static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
wchan = get_wchan(task);
if (lookup_symbol_name(wchan, symname) < 0)
if (lookup_symbol_name(wchan, symname) < 0) {
if (!ptrace_may_access(task, PTRACE_MODE_READ))
return 0;
else
return seq_printf(m, "%lu", wchan);
else
return seq_printf(m, "%s", symname);
seq_printf(m, "%lu", wchan);
} else {
seq_printf(m, "%s", symname);
}
return 0;
}
#endif /* CONFIG_KALLSYMS */
@ -309,10 +311,12 @@ static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns,
static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
struct pid *pid, struct task_struct *task)
{
return seq_printf(m, "%llu %llu %lu\n",
(unsigned long long)task->se.sum_exec_runtime,
(unsigned long long)task->sched_info.run_delay,
task->sched_info.pcount);
seq_printf(m, "%llu %llu %lu\n",
(unsigned long long)task->se.sum_exec_runtime,
(unsigned long long)task->sched_info.run_delay,
task->sched_info.pcount);
return 0;
}
#endif
@ -387,7 +391,9 @@ static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
points = oom_badness(task, NULL, NULL, totalpages) *
1000 / totalpages;
read_unlock(&tasklist_lock);
return seq_printf(m, "%lu\n", points);
seq_printf(m, "%lu\n", points);
return 0;
}
struct limit_names {
@ -432,15 +438,15 @@ static int proc_pid_limits(struct seq_file *m, struct pid_namespace *ns,
* print the file header
*/
seq_printf(m, "%-25s %-20s %-20s %-10s\n",
"Limit", "Soft Limit", "Hard Limit", "Units");
"Limit", "Soft Limit", "Hard Limit", "Units");
for (i = 0; i < RLIM_NLIMITS; i++) {
if (rlim[i].rlim_cur == RLIM_INFINITY)
seq_printf(m, "%-25s %-20s ",
lnames[i].name, "unlimited");
lnames[i].name, "unlimited");
else
seq_printf(m, "%-25s %-20lu ",
lnames[i].name, rlim[i].rlim_cur);
lnames[i].name, rlim[i].rlim_cur);
if (rlim[i].rlim_max == RLIM_INFINITY)
seq_printf(m, "%-20s ", "unlimited");
@ -462,7 +468,9 @@ static int proc_pid_syscall(struct seq_file *m, struct pid_namespace *ns,
{
long nr;
unsigned long args[6], sp, pc;
int res = lock_trace(task);
int res;
res = lock_trace(task);
if (res)
return res;
@ -477,7 +485,8 @@ static int proc_pid_syscall(struct seq_file *m, struct pid_namespace *ns,
args[0], args[1], args[2], args[3], args[4], args[5],
sp, pc);
unlock_trace(task);
return res;
return 0;
}
#endif /* CONFIG_HAVE_ARCH_TRACEHOOK */
@ -2002,12 +2011,13 @@ static int show_timer(struct seq_file *m, void *v)
notify = timer->it_sigev_notify;
seq_printf(m, "ID: %d\n", timer->it_id);
seq_printf(m, "signal: %d/%p\n", timer->sigq->info.si_signo,
timer->sigq->info.si_value.sival_ptr);
seq_printf(m, "signal: %d/%p\n",
timer->sigq->info.si_signo,
timer->sigq->info.si_value.sival_ptr);
seq_printf(m, "notify: %s/%s.%d\n",
nstr[notify & ~SIGEV_THREAD_ID],
(notify & SIGEV_THREAD_ID) ? "tid" : "pid",
pid_nr_ns(timer->it_pid, tp->ns));
nstr[notify & ~SIGEV_THREAD_ID],
(notify & SIGEV_THREAD_ID) ? "tid" : "pid",
pid_nr_ns(timer->it_pid, tp->ns));
seq_printf(m, "ClockID: %d\n", timer->it_clock);
return 0;
@ -2352,21 +2362,23 @@ static int do_io_accounting(struct task_struct *task, struct seq_file *m, int wh
unlock_task_sighand(task, &flags);
}
result = seq_printf(m,
"rchar: %llu\n"
"wchar: %llu\n"
"syscr: %llu\n"
"syscw: %llu\n"
"read_bytes: %llu\n"
"write_bytes: %llu\n"
"cancelled_write_bytes: %llu\n",
(unsigned long long)acct.rchar,
(unsigned long long)acct.wchar,
(unsigned long long)acct.syscr,
(unsigned long long)acct.syscw,
(unsigned long long)acct.read_bytes,
(unsigned long long)acct.write_bytes,
(unsigned long long)acct.cancelled_write_bytes);
seq_printf(m,
"rchar: %llu\n"
"wchar: %llu\n"
"syscr: %llu\n"
"syscw: %llu\n"
"read_bytes: %llu\n"
"write_bytes: %llu\n"
"cancelled_write_bytes: %llu\n",
(unsigned long long)acct.rchar,
(unsigned long long)acct.wchar,
(unsigned long long)acct.syscr,
(unsigned long long)acct.syscw,
(unsigned long long)acct.read_bytes,
(unsigned long long)acct.write_bytes,
(unsigned long long)acct.cancelled_write_bytes);
result = 0;
out_unlock:
mutex_unlock(&task->signal->cred_guard_mutex);
return result;

View file

@ -523,6 +523,9 @@ ssize_t generic_file_splice_read(struct file *in, loff_t *ppos,
loff_t isize, left;
int ret;
if (IS_DAX(in->f_mapping->host))
return default_file_splice_read(in, ppos, pipe, len, flags);
isize = i_size_read(in->f_mapping->host);
if (unlikely(*ppos >= isize))
return 0;

View file

@ -4,44 +4,6 @@
#include <uapi/linux/a.out.h>
#ifndef __ASSEMBLY__
#if defined (M_OLDSUN2)
#else
#endif
#if defined (M_68010)
#else
#endif
#if defined (M_68020)
#else
#endif
#if defined (M_SPARC)
#else
#endif
#if !defined (N_MAGIC)
#endif
#if !defined (N_BADMAG)
#endif
#if !defined (N_TXTOFF)
#endif
#if !defined (N_DATOFF)
#endif
#if !defined (N_TRELOFF)
#endif
#if !defined (N_DRELOFF)
#endif
#if !defined (N_SYMOFF)
#endif
#if !defined (N_STROFF)
#endif
#if !defined (N_TXTADDR)
#endif
#if defined(vax) || defined(hp300) || defined(pyr)
#endif
#ifdef sony
#endif /* Sony. */
#ifdef is68k
#endif
#if defined(m68k) && defined(PORTAR)
#endif
#ifdef linux
#include <asm/page.h>
#if defined(__i386__) || defined(__mc68000__)
@ -51,34 +13,5 @@
#endif
#endif
#endif
#ifndef N_DATADDR
#endif
#if !defined (N_BSSADDR)
#endif
#if !defined (N_NLIST_DECLARED)
#endif /* no N_NLIST_DECLARED. */
#if !defined (N_UNDF)
#endif
#if !defined (N_ABS)
#endif
#if !defined (N_TEXT)
#endif
#if !defined (N_DATA)
#endif
#if !defined (N_BSS)
#endif
#if !defined (N_FN)
#endif
#if !defined (N_EXT)
#endif
#if !defined (N_TYPE)
#endif
#if !defined (N_STAB)
#endif
#if !defined (N_RELOCATION_INFO_DECLARED)
#ifdef NS32K
#else
#endif
#endif /* no N_RELOCATION_INFO_DECLARED. */
#endif /*__ASSEMBLY__ */
#endif /* __A_OUT_GNU_H__ */

View file

@ -172,12 +172,8 @@ extern unsigned int bitmap_ord_to_pos(const unsigned long *bitmap, unsigned int
extern int bitmap_print_to_pagebuf(bool list, char *buf,
const unsigned long *maskp, int nmaskbits);
#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) % BITS_PER_LONG))
#define BITMAP_LAST_WORD_MASK(nbits) \
( \
((nbits) % BITS_PER_LONG) ? \
(1UL<<((nbits) % BITS_PER_LONG))-1 : ~0UL \
)
#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1)))
#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))
#define small_const_nbits(nbits) \
(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)
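
A standalone sketch (not part of the patch) showing why the branch-free BITMAP_LAST_WORD_MASK() above matches the old conditional form; BITS_PER_LONG here is a local stand-in for the kernel constant:

#include <stdio.h>

#define BITS_PER_LONG (sizeof(unsigned long) * 8)
#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))

int main(void)
{
	/* nbits = 5: low five bits set, same as (1UL << 5) - 1 */
	printf("%#lx\n", BITMAP_LAST_WORD_MASK(5));
	/* nbits = BITS_PER_LONG: shift count folds to 0, mask is all ones,
	 * with no (nbits % BITS_PER_LONG) ? ... : ~0UL branch needed */
	printf("%#lx\n", BITMAP_LAST_WORD_MASK(BITS_PER_LONG));
	return 0;
}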

View file

@ -205,6 +205,7 @@ static inline kernel_cap_t cap_raise_nfsd_set(const kernel_cap_t a,
cap_intersect(permitted, __cap_nfsd_set));
}
#ifdef CONFIG_MULTIUSER
extern bool has_capability(struct task_struct *t, int cap);
extern bool has_ns_capability(struct task_struct *t,
struct user_namespace *ns, int cap);
@ -213,6 +214,34 @@ extern bool has_ns_capability_noaudit(struct task_struct *t,
struct user_namespace *ns, int cap);
extern bool capable(int cap);
extern bool ns_capable(struct user_namespace *ns, int cap);
#else
static inline bool has_capability(struct task_struct *t, int cap)
{
return true;
}
static inline bool has_ns_capability(struct task_struct *t,
struct user_namespace *ns, int cap)
{
return true;
}
static inline bool has_capability_noaudit(struct task_struct *t, int cap)
{
return true;
}
static inline bool has_ns_capability_noaudit(struct task_struct *t,
struct user_namespace *ns, int cap)
{
return true;
}
static inline bool capable(int cap)
{
return true;
}
static inline bool ns_capable(struct user_namespace *ns, int cap)
{
return true;
}
#endif /* CONFIG_MULTIUSER */
extern bool capable_wrt_inode_uidgid(const struct inode *inode, int cap);
extern bool file_ns_capable(const struct file *file, struct user_namespace *ns, int cap);

View file

@ -34,6 +34,7 @@ extern int sysctl_compaction_handler(struct ctl_table *table, int write,
extern int sysctl_extfrag_threshold;
extern int sysctl_extfrag_handler(struct ctl_table *table, int write,
void __user *buffer, size_t *length, loff_t *ppos);
extern int sysctl_compact_unevictable_allowed;
extern int fragmentation_index(struct zone *zone, unsigned int order);
extern unsigned long try_to_compact_pages(gfp_t gfp_mask, unsigned int order,

View file

@ -62,9 +62,27 @@ do { \
groups_free(group_info); \
} while (0)
extern struct group_info *groups_alloc(int);
extern struct group_info init_groups;
#ifdef CONFIG_MULTIUSER
extern struct group_info *groups_alloc(int);
extern void groups_free(struct group_info *);
extern int in_group_p(kgid_t);
extern int in_egroup_p(kgid_t);
#else
static inline void groups_free(struct group_info *group_info)
{
}
static inline int in_group_p(kgid_t grp)
{
return 1;
}
static inline int in_egroup_p(kgid_t grp)
{
return 1;
}
#endif
extern int set_current_groups(struct group_info *);
extern void set_groups(struct cred *, struct group_info *);
extern int groups_search(const struct group_info *, kgid_t);
@ -74,9 +92,6 @@ extern bool may_setgroups(void);
#define GROUP_AT(gi, i) \
((gi)->blocks[(i) / NGROUPS_PER_BLOCK][(i) % NGROUPS_PER_BLOCK])
extern int in_group_p(kgid_t);
extern int in_egroup_p(kgid_t);
/*
* The security context of a task
*

View file

@ -2615,6 +2615,7 @@ int dax_clear_blocks(struct inode *, sector_t block, long size);
int dax_zero_page_range(struct inode *, loff_t from, unsigned len, get_block_t);
int dax_truncate_page(struct inode *, loff_t from, get_block_t);
int dax_fault(struct vm_area_struct *, struct vm_fault *, get_block_t);
int dax_pfn_mkwrite(struct vm_area_struct *, struct vm_fault *);
#define dax_mkwrite(vma, vmf, gb) dax_fault(vma, vmf, gb)
#ifdef CONFIG_BLOCK
@ -2679,7 +2680,6 @@ void inode_sub_bytes(struct inode *inode, loff_t bytes);
loff_t inode_get_bytes(struct inode *inode);
void inode_set_bytes(struct inode *inode, loff_t bytes);
extern int vfs_readdir(struct file *, filldir_t, void *);
extern int iterate_dir(struct file *, struct dir_context *);
extern int vfs_stat(const char __user *, struct kstat *);

View file

@ -22,7 +22,13 @@ struct mmu_gather;
struct hugepage_subpool {
spinlock_t lock;
long count;
long max_hpages, used_hpages;
long max_hpages; /* Maximum huge pages or -1 if no maximum. */
long used_hpages; /* Used count against maximum, includes */
/* both alloced and reserved pages. */
struct hstate *hstate;
long min_hpages; /* Minimum huge pages or -1 if no minimum. */
long rsv_hpages; /* Pages reserved against global pool to */
/* satisfy minimum size. */
};
struct resv_map {
@ -38,11 +44,10 @@ extern int hugetlb_max_hstate __read_mostly;
#define for_each_hstate(h) \
for ((h) = hstates; (h) < &hstates[hugetlb_max_hstate]; (h)++)
struct hugepage_subpool *hugepage_new_subpool(long nr_blocks);
struct hugepage_subpool *hugepage_new_subpool(struct hstate *h, long max_hpages,
long min_hpages);
void hugepage_put_subpool(struct hugepage_subpool *spool);
int PageHuge(struct page *page);
void reset_vma_resv_huge_pages(struct vm_area_struct *vma);
int hugetlb_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
int hugetlb_overcommit_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
@ -79,7 +84,6 @@ void hugetlb_unreserve_pages(struct inode *inode, long offset, long freed);
int dequeue_hwpoisoned_huge_page(struct page *page);
bool isolate_huge_page(struct page *page, struct list_head *list);
void putback_active_hugepage(struct page *page);
bool is_hugepage_active(struct page *page);
void free_huge_page(struct page *page);
#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
@ -109,11 +113,6 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
#else /* !CONFIG_HUGETLB_PAGE */
static inline int PageHuge(struct page *page)
{
return 0;
}
static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
{
}
@ -152,7 +151,6 @@ static inline bool isolate_huge_page(struct page *page, struct list_head *list)
return false;
}
#define putback_active_hugepage(p) do {} while (0)
#define is_hugepage_active(x) false
static inline unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
unsigned long address, unsigned long end, pgprot_t newprot)

View file

@ -196,10 +196,8 @@ extern struct resource * __request_region(struct resource *,
/* Compatibility cruft */
#define release_region(start,n) __release_region(&ioport_resource, (start), (n))
#define check_mem_region(start,n) __check_region(&iomem_resource, (start), (n))
#define release_mem_region(start,n) __release_region(&iomem_resource, (start), (n))
extern int __check_region(struct resource *, resource_size_t, resource_size_t);
extern void __release_region(struct resource *, resource_size_t,
resource_size_t);
#ifdef CONFIG_MEMORY_HOTREMOVE
@ -207,12 +205,6 @@ extern int release_mem_region_adjustable(struct resource *, resource_size_t,
resource_size_t);
#endif
static inline int __deprecated check_region(resource_size_t s,
resource_size_t n)
{
return __check_region(&ioport_resource, s, n);
}
/* Wrappers for managed devices */
struct device;

View file

@ -44,6 +44,7 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object);
void kasan_kmalloc_large(const void *ptr, size_t size);
void kasan_kfree_large(const void *ptr);
void kasan_kfree(void *ptr);
void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
void kasan_krealloc(const void *object, size_t new_size);
@ -71,6 +72,7 @@ static inline void kasan_poison_object_data(struct kmem_cache *cache,
static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
static inline void kasan_kfree_large(const void *ptr) {}
static inline void kasan_kfree(void *ptr) {}
static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
size_t size) {}
static inline void kasan_krealloc(const void *object, size_t new_size) {}

View file

@ -35,18 +35,6 @@ static inline void ksm_exit(struct mm_struct *mm)
__ksm_exit(mm);
}
/*
* A KSM page is one of those write-protected "shared pages" or "merged pages"
* which KSM maps into multiple mms, wherever identical anonymous page content
* is found in VM_MERGEABLE vmas. It's a PageAnon page, pointing not to any
* anon_vma, but to that page's node of the stable tree.
*/
static inline int PageKsm(struct page *page)
{
return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM);
}
static inline struct stable_node *page_stable_node(struct page *page)
{
return PageKsm(page) ? page_rmapping(page) : NULL;
@ -87,11 +75,6 @@ static inline void ksm_exit(struct mm_struct *mm)
{
}
static inline int PageKsm(struct page *page)
{
return 0;
}
#ifdef CONFIG_MMU
static inline int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
unsigned long end, int advice, unsigned long *vm_flags)

View file

@ -36,7 +36,8 @@ extern void mempool_free(void *element, mempool_t *pool);
/*
* A mempool_alloc_t and mempool_free_t that get the memory from
* a slab that is passed in through pool_data.
* a slab cache that is passed in through pool_data.
* Note: the slab cache may not have a ctor function.
*/
void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data);
void mempool_free_slab(void *element, void *pool_data);

View file

@ -251,6 +251,9 @@ struct vm_operations_struct {
* writable, if an error is returned it will cause a SIGBUS */
int (*page_mkwrite)(struct vm_area_struct *vma, struct vm_fault *vmf);
/* same as page_mkwrite when using VM_PFNMAP|VM_MIXEDMAP */
int (*pfn_mkwrite)(struct vm_area_struct *vma, struct vm_fault *vmf);
/* called by access_process_vm when get_user_pages() fails, typically
* for use by special VMAs that can switch between memory and hardware
*/
@ -494,18 +497,9 @@ static inline int page_count(struct page *page)
return atomic_read(&compound_head(page)->_count);
}
#ifdef CONFIG_HUGETLB_PAGE
extern int PageHeadHuge(struct page *page_head);
#else /* CONFIG_HUGETLB_PAGE */
static inline int PageHeadHuge(struct page *page_head)
{
return 0;
}
#endif /* CONFIG_HUGETLB_PAGE */
static inline bool __compound_tail_refcounted(struct page *page)
{
return !PageSlab(page) && !PageHeadHuge(page);
return PageAnon(page) && !PageSlab(page) && !PageHeadHuge(page);
}
/*
@ -571,53 +565,6 @@ static inline void init_page_count(struct page *page)
atomic_set(&page->_count, 1);
}
/*
* PageBuddy() indicate that the page is free and in the buddy system
* (see mm/page_alloc.c).
*
* PAGE_BUDDY_MAPCOUNT_VALUE must be <= -2 but better not too close to
* -2 so that an underflow of the page_mapcount() won't be mistaken
* for a genuine PAGE_BUDDY_MAPCOUNT_VALUE. -128 can be created very
* efficiently by most CPU architectures.
*/
#define PAGE_BUDDY_MAPCOUNT_VALUE (-128)
static inline int PageBuddy(struct page *page)
{
return atomic_read(&page->_mapcount) == PAGE_BUDDY_MAPCOUNT_VALUE;
}
static inline void __SetPageBuddy(struct page *page)
{
VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
atomic_set(&page->_mapcount, PAGE_BUDDY_MAPCOUNT_VALUE);
}
static inline void __ClearPageBuddy(struct page *page)
{
VM_BUG_ON_PAGE(!PageBuddy(page), page);
atomic_set(&page->_mapcount, -1);
}
#define PAGE_BALLOON_MAPCOUNT_VALUE (-256)
static inline int PageBalloon(struct page *page)
{
return atomic_read(&page->_mapcount) == PAGE_BALLOON_MAPCOUNT_VALUE;
}
static inline void __SetPageBalloon(struct page *page)
{
VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
}
static inline void __ClearPageBalloon(struct page *page)
{
VM_BUG_ON_PAGE(!PageBalloon(page), page);
atomic_set(&page->_mapcount, -1);
}
void put_page(struct page *page);
void put_pages_list(struct list_head *pages);
@ -1006,34 +953,10 @@ void page_address_init(void);
#define page_address_init() do { } while(0)
#endif
/*
* On an anonymous page mapped into a user virtual memory area,
* page->mapping points to its anon_vma, not to a struct address_space;
* with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h.
*
* On an anonymous page in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
* the PAGE_MAPPING_KSM bit may be set along with the PAGE_MAPPING_ANON bit;
* and then page->mapping points, not to an anon_vma, but to a private
* structure which KSM associates with that merged page. See ksm.h.
*
* PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is currently never used.
*
* Please note that, confusingly, "page_mapping" refers to the inode
* address_space which maps the page from disk; whereas "page_mapped"
* refers to user virtual address space into which the page is mapped.
*/
#define PAGE_MAPPING_ANON 1
#define PAGE_MAPPING_KSM 2
#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM)
extern void *page_rmapping(struct page *page);
extern struct anon_vma *page_anon_vma(struct page *page);
extern struct address_space *page_mapping(struct page *page);
/* Neutral page->mapping pointer to address_space or anon_vma or other */
static inline void *page_rmapping(struct page *page)
{
return (void *)((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
}
extern struct address_space *__page_file_mapping(struct page *);
static inline
@ -1045,11 +968,6 @@ struct address_space *page_file_mapping(struct page *page)
return page->mapping;
}
static inline int PageAnon(struct page *page)
{
return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
}
/*
* Return the pagecache index of the passed page. Regular pagecache pages
* use ->index whereas swapcache pages use ->private
@ -1975,10 +1893,10 @@ extern unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);
static inline unsigned long
vm_unmapped_area(struct vm_unmapped_area_info *info)
{
if (!(info->flags & VM_UNMAPPED_AREA_TOPDOWN))
return unmapped_area(info);
else
if (info->flags & VM_UNMAPPED_AREA_TOPDOWN)
return unmapped_area_topdown(info);
else
return unmapped_area(info);
}
/* truncate.c */

View file

@ -842,16 +842,16 @@ static inline int populated_zone(struct zone *zone)
extern int movable_zone;
#ifdef CONFIG_HIGHMEM
static inline int zone_movable_is_highmem(void)
{
#if defined(CONFIG_HIGHMEM) && defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
return movable_zone == ZONE_HIGHMEM;
#elif defined(CONFIG_HIGHMEM)
return (ZONE_MOVABLE - 1) == ZONE_HIGHMEM;
#else
return 0;
return (ZONE_MOVABLE - 1) == ZONE_HIGHMEM;
#endif
}
#endif
static inline int is_highmem_idx(enum zone_type idx)
{

View file

@ -289,6 +289,47 @@ PAGEFLAG_FALSE(HWPoison)
#define __PG_HWPOISON 0
#endif
/*
* On an anonymous page mapped into a user virtual memory area,
* page->mapping points to its anon_vma, not to a struct address_space;
* with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h.
*
* On an anonymous page in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
* the PAGE_MAPPING_KSM bit may be set along with the PAGE_MAPPING_ANON bit;
* and then page->mapping points, not to an anon_vma, but to a private
* structure which KSM associates with that merged page. See ksm.h.
*
* PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is currently never used.
*
* Please note that, confusingly, "page_mapping" refers to the inode
* address_space which maps the page from disk; whereas "page_mapped"
* refers to user virtual address space into which the page is mapped.
*/
#define PAGE_MAPPING_ANON 1
#define PAGE_MAPPING_KSM 2
#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM)
static inline int PageAnon(struct page *page)
{
return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
}
#ifdef CONFIG_KSM
/*
* A KSM page is one of those write-protected "shared pages" or "merged pages"
* which KSM maps into multiple mms, wherever identical anonymous page content
* is found in VM_MERGEABLE vmas. It's a PageAnon page, pointing not to any
* anon_vma, but to that page's node of the stable tree.
*/
static inline int PageKsm(struct page *page)
{
return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM);
}
#else
TESTPAGEFLAG_FALSE(Ksm)
#endif
u64 stable_page_flags(struct page *page);
static inline int PageUptodate(struct page *page)
@ -426,6 +467,21 @@ static inline void ClearPageCompound(struct page *page)
#endif /* !PAGEFLAGS_EXTENDED */
#ifdef CONFIG_HUGETLB_PAGE
int PageHuge(struct page *page);
int PageHeadHuge(struct page *page);
bool page_huge_active(struct page *page);
#else
TESTPAGEFLAG_FALSE(Huge)
TESTPAGEFLAG_FALSE(HeadHuge)
static inline bool page_huge_active(struct page *page)
{
return 0;
}
#endif
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
* PageHuge() only returns true for hugetlbfs pages, but not for
@ -479,6 +535,53 @@ static inline int PageTransTail(struct page *page)
}
#endif
/*
* PageBuddy() indicate that the page is free and in the buddy system
* (see mm/page_alloc.c).
*
* PAGE_BUDDY_MAPCOUNT_VALUE must be <= -2 but better not too close to
* -2 so that an underflow of the page_mapcount() won't be mistaken
* for a genuine PAGE_BUDDY_MAPCOUNT_VALUE. -128 can be created very
* efficiently by most CPU architectures.
*/
#define PAGE_BUDDY_MAPCOUNT_VALUE (-128)
static inline int PageBuddy(struct page *page)
{
return atomic_read(&page->_mapcount) == PAGE_BUDDY_MAPCOUNT_VALUE;
}
static inline void __SetPageBuddy(struct page *page)
{
VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
atomic_set(&page->_mapcount, PAGE_BUDDY_MAPCOUNT_VALUE);
}
static inline void __ClearPageBuddy(struct page *page)
{
VM_BUG_ON_PAGE(!PageBuddy(page), page);
atomic_set(&page->_mapcount, -1);
}
#define PAGE_BALLOON_MAPCOUNT_VALUE (-256)
static inline int PageBalloon(struct page *page)
{
return atomic_read(&page->_mapcount) == PAGE_BALLOON_MAPCOUNT_VALUE;
}
static inline void __SetPageBalloon(struct page *page)
{
VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
}
static inline void __ClearPageBalloon(struct page *page)
{
VM_BUG_ON_PAGE(!PageBalloon(page), page);
atomic_set(&page->_mapcount, -1);
}
/*
* If network-based swap is enabled, sl*b must keep track of whether pages
* were allocated from pfmemalloc reserves.

View file

@ -255,6 +255,11 @@ extern asmlinkage void dump_stack(void) __cold;
printk(KERN_NOTICE pr_fmt(fmt), ##__VA_ARGS__)
#define pr_info(fmt, ...) \
printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
/*
* Like KERN_CONT, pr_cont() should only be used when continuing
* a line with no newline ('\n') enclosed. Otherwise it defaults
* back to KERN_DEFAULT.
*/
#define pr_cont(fmt, ...) \
printk(KERN_CONT fmt, ##__VA_ARGS__)

View file

@ -70,7 +70,8 @@ void ctrl_alt_del(void);
#define POWEROFF_CMD_PATH_LEN 256
extern char poweroff_cmd[POWEROFF_CMD_PATH_LEN];
extern int orderly_poweroff(bool force);
extern void orderly_poweroff(bool force);
extern void orderly_reboot(void);
/*
* Emergency restart, callable from an interrupt handler.

View file

@ -105,14 +105,6 @@ static inline void put_anon_vma(struct anon_vma *anon_vma)
__put_anon_vma(anon_vma);
}
static inline struct anon_vma *page_anon_vma(struct page *page)
{
if (((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) !=
PAGE_MAPPING_ANON)
return NULL;
return page_rmapping(page);
}
static inline void vma_lock_anon_vma(struct vm_area_struct *vma)
{
struct anon_vma *anon_vma = vma->anon_vma;

View file

@ -47,22 +47,22 @@ static inline int string_unescape_any_inplace(char *buf)
#define ESCAPE_ANY_NP (ESCAPE_ANY | ESCAPE_NP)
#define ESCAPE_HEX 0x20
int string_escape_mem(const char *src, size_t isz, char **dst, size_t osz,
int string_escape_mem(const char *src, size_t isz, char *dst, size_t osz,
unsigned int flags, const char *esc);
static inline int string_escape_mem_any_np(const char *src, size_t isz,
char **dst, size_t osz, const char *esc)
char *dst, size_t osz, const char *esc)
{
return string_escape_mem(src, isz, dst, osz, ESCAPE_ANY_NP, esc);
}
static inline int string_escape_str(const char *src, char **dst, size_t sz,
static inline int string_escape_str(const char *src, char *dst, size_t sz,
unsigned int flags, const char *esc)
{
return string_escape_mem(src, strlen(src), dst, sz, flags, esc);
}
static inline int string_escape_str_any_np(const char *src, char **dst,
static inline int string_escape_str_any_np(const char *src, char *dst,
size_t sz, const char *esc)
{
return string_escape_str(src, dst, sz, ESCAPE_ANY_NP, esc);

@ -307,7 +307,7 @@ extern void lru_add_drain(void);
extern void lru_add_drain_cpu(int cpu);
extern void lru_add_drain_all(void);
extern void rotate_reclaimable_page(struct page *page);
extern void deactivate_page(struct page *page);
extern void deactivate_file_page(struct page *page);
extern void swap_setup(void);
extern void add_page_to_unevictable_list(struct page *page);

@ -146,12 +146,6 @@ typedef u64 dma_addr_t;
typedef u32 dma_addr_t;
#endif /* dma_addr_t */
#ifdef __CHECKER__
#else
#endif
#ifdef __CHECK_ENDIAN__
#else
#endif
typedef unsigned __bitwise__ gfp_t;
typedef unsigned __bitwise__ fmode_t;
typedef unsigned __bitwise__ oom_flags_t;

@ -29,6 +29,7 @@ typedef struct {
#define KUIDT_INIT(value) (kuid_t){ value }
#define KGIDT_INIT(value) (kgid_t){ value }
#ifdef CONFIG_MULTIUSER
static inline uid_t __kuid_val(kuid_t uid)
{
return uid.val;
@ -38,6 +39,17 @@ static inline gid_t __kgid_val(kgid_t gid)
{
return gid.val;
}
#else
static inline uid_t __kuid_val(kuid_t uid)
{
return 0;
}
static inline gid_t __kgid_val(kgid_t gid)
{
return 0;
}
#endif
#define GLOBAL_ROOT_UID KUIDT_INIT(0)
#define GLOBAL_ROOT_GID KGIDT_INIT(0)

@ -47,5 +47,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
unsigned long zs_get_total_pages(struct zs_pool *pool);
unsigned long zs_compact(struct zs_pool *pool);
#endif

@ -0,0 +1,66 @@
#undef TRACE_SYSTEM
#define TRACE_SYSTEM cma
#if !defined(_TRACE_CMA_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_CMA_H
#include <linux/types.h>
#include <linux/tracepoint.h>
TRACE_EVENT(cma_alloc,
TP_PROTO(unsigned long pfn, const struct page *page,
unsigned int count, unsigned int align),
TP_ARGS(pfn, page, count, align),
TP_STRUCT__entry(
__field(unsigned long, pfn)
__field(const struct page *, page)
__field(unsigned int, count)
__field(unsigned int, align)
),
TP_fast_assign(
__entry->pfn = pfn;
__entry->page = page;
__entry->count = count;
__entry->align = align;
),
TP_printk("pfn=%lx page=%p count=%u align=%u",
__entry->pfn,
__entry->page,
__entry->count,
__entry->align)
);
TRACE_EVENT(cma_release,
TP_PROTO(unsigned long pfn, const struct page *page,
unsigned int count),
TP_ARGS(pfn, page, count),
TP_STRUCT__entry(
__field(unsigned long, pfn)
__field(const struct page *, page)
__field(unsigned int, count)
),
TP_fast_assign(
__entry->pfn = pfn;
__entry->page = page;
__entry->count = count;
),
TP_printk("pfn=%lx page=%p count=%u",
__entry->pfn,
__entry->page,
__entry->count)
);
#endif /* _TRACE_CMA_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
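
A hedged sketch of the emitting side (the surrounding allocator code in mm/cma.c is not shown in this diff, so the function below is illustrative only):

#define CREATE_TRACE_POINTS
#include <linux/mm_types.h>
#include <trace/events/cma.h>

/* Illustrative: pair the new events around an allocation's lifetime. */
static void cma_trace_example(unsigned long pfn, const struct page *page,
			      unsigned int count, unsigned int align)
{
	trace_cma_alloc(pfn, page, count, align);
	/* ... pages in use ... */
	trace_cma_release(pfn, page, count);
}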

@ -394,6 +394,7 @@ endchoice
config BSD_PROCESS_ACCT
bool "BSD Process Accounting"
depends on MULTIUSER
help
If you say Y here, a user level program will be able to instruct the
kernel (via a special system call) to write process accounting
@ -420,6 +421,7 @@ config BSD_PROCESS_ACCT_V3
config TASKSTATS
bool "Export task/process statistics through netlink"
depends on NET
depends on MULTIUSER
default n
help
Export selected statistics for tasks/processes through the
@ -1160,6 +1162,7 @@ config CHECKPOINT_RESTORE
menuconfig NAMESPACES
bool "Namespaces support" if EXPERT
depends on MULTIUSER
default !EXPERT
help
Provides the way to make tasks work with different objects using
@ -1356,11 +1359,25 @@ menuconfig EXPERT
config UID16
bool "Enable 16-bit UID system calls" if EXPERT
depends on HAVE_UID16
depends on HAVE_UID16 && MULTIUSER
default y
help
This enables the legacy 16-bit UID syscall wrappers.
config MULTIUSER
bool "Multiple users, groups and capabilities support" if EXPERT
default y
help
This option enables support for non-root users, groups and
capabilities.
If you say N here, all processes will run with UID 0, GID 0, and all
possible capabilities. Saying N here also compiles out support for
system calls related to UIDs, GIDs, and capabilities, such as setuid,
setgid, and capset.
If unsure, say Y here.
config SGETMASK_SYSCALL
bool "sgetmask/ssetmask syscalls support" if EXPERT
def_bool PARISC || MN10300 || BLACKFIN || M68K || PPC || MIPS || X86 || SPARC || CRIS || MICROBLAZE || SUPERH

@ -1015,22 +1015,24 @@ static int sysvipc_msg_proc_show(struct seq_file *s, void *it)
struct user_namespace *user_ns = seq_user_ns(s);
struct msg_queue *msq = it;
return seq_printf(s,
"%10d %10d %4o %10lu %10lu %5u %5u %5u %5u %5u %5u %10lu %10lu %10lu\n",
msq->q_perm.key,
msq->q_perm.id,
msq->q_perm.mode,
msq->q_cbytes,
msq->q_qnum,
msq->q_lspid,
msq->q_lrpid,
from_kuid_munged(user_ns, msq->q_perm.uid),
from_kgid_munged(user_ns, msq->q_perm.gid),
from_kuid_munged(user_ns, msq->q_perm.cuid),
from_kgid_munged(user_ns, msq->q_perm.cgid),
msq->q_stime,
msq->q_rtime,
msq->q_ctime);
seq_printf(s,
"%10d %10d %4o %10lu %10lu %5u %5u %5u %5u %5u %5u %10lu %10lu %10lu\n",
msq->q_perm.key,
msq->q_perm.id,
msq->q_perm.mode,
msq->q_cbytes,
msq->q_qnum,
msq->q_lspid,
msq->q_lrpid,
from_kuid_munged(user_ns, msq->q_perm.uid),
from_kgid_munged(user_ns, msq->q_perm.gid),
from_kuid_munged(user_ns, msq->q_perm.cuid),
from_kgid_munged(user_ns, msq->q_perm.cgid),
msq->q_stime,
msq->q_rtime,
msq->q_ctime);
return 0;
}
#endif

@ -2170,17 +2170,19 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it)
sem_otime = get_semotime(sma);
return seq_printf(s,
"%10d %10d %4o %10u %5u %5u %5u %5u %10lu %10lu\n",
sma->sem_perm.key,
sma->sem_perm.id,
sma->sem_perm.mode,
sma->sem_nsems,
from_kuid_munged(user_ns, sma->sem_perm.uid),
from_kgid_munged(user_ns, sma->sem_perm.gid),
from_kuid_munged(user_ns, sma->sem_perm.cuid),
from_kgid_munged(user_ns, sma->sem_perm.cgid),
sem_otime,
sma->sem_ctime);
seq_printf(s,
"%10d %10d %4o %10u %5u %5u %5u %5u %10lu %10lu\n",
sma->sem_perm.key,
sma->sem_perm.id,
sma->sem_perm.mode,
sma->sem_nsems,
from_kuid_munged(user_ns, sma->sem_perm.uid),
from_kgid_munged(user_ns, sma->sem_perm.gid),
from_kuid_munged(user_ns, sma->sem_perm.cuid),
from_kgid_munged(user_ns, sma->sem_perm.cgid),
sem_otime,
sma->sem_ctime);
return 0;
}
#endif

@ -1342,25 +1342,27 @@ static int sysvipc_shm_proc_show(struct seq_file *s, void *it)
#define SIZE_SPEC "%21lu"
#endif
return seq_printf(s,
"%10d %10d %4o " SIZE_SPEC " %5u %5u "
"%5lu %5u %5u %5u %5u %10lu %10lu %10lu "
SIZE_SPEC " " SIZE_SPEC "\n",
shp->shm_perm.key,
shp->shm_perm.id,
shp->shm_perm.mode,
shp->shm_segsz,
shp->shm_cprid,
shp->shm_lprid,
shp->shm_nattch,
from_kuid_munged(user_ns, shp->shm_perm.uid),
from_kgid_munged(user_ns, shp->shm_perm.gid),
from_kuid_munged(user_ns, shp->shm_perm.cuid),
from_kgid_munged(user_ns, shp->shm_perm.cgid),
shp->shm_atim,
shp->shm_dtim,
shp->shm_ctim,
rss * PAGE_SIZE,
swp * PAGE_SIZE);
seq_printf(s,
"%10d %10d %4o " SIZE_SPEC " %5u %5u "
"%5lu %5u %5u %5u %5u %10lu %10lu %10lu "
SIZE_SPEC " " SIZE_SPEC "\n",
shp->shm_perm.key,
shp->shm_perm.id,
shp->shm_perm.mode,
shp->shm_segsz,
shp->shm_cprid,
shp->shm_lprid,
shp->shm_nattch,
from_kuid_munged(user_ns, shp->shm_perm.uid),
from_kgid_munged(user_ns, shp->shm_perm.gid),
from_kuid_munged(user_ns, shp->shm_perm.cuid),
from_kgid_munged(user_ns, shp->shm_perm.cgid),
shp->shm_atim,
shp->shm_dtim,
shp->shm_ctim,
rss * PAGE_SIZE,
swp * PAGE_SIZE);
return 0;
}
#endif

@ -837,8 +837,10 @@ static int sysvipc_proc_show(struct seq_file *s, void *it)
struct ipc_proc_iter *iter = s->private;
struct ipc_proc_iface *iface = iter->iface;
if (it == SEQ_START_TOKEN)
return seq_puts(s, iface->header);
if (it == SEQ_START_TOKEN) {
seq_puts(s, iface->header);
return 0;
}
return iface->show(s, it);
}
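
The conversion pattern used throughout this series, shown as a stand-alone sketch: callers stop relying on seq_printf()'s return value, return 0 from the ->show() method, and let the seq_file core detect overflow and retry with a larger buffer.

#include <linux/seq_file.h>

/* Illustrative ->show() implementation following the new convention. */
static int example_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "example value: %d\n", 42);	/* result intentionally ignored */
	return 0;
}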

@ -9,7 +9,9 @@ obj-y = fork.o exec_domain.o panic.o \
extable.o params.o \
kthread.o sys_ni.o nsproxy.o \
notifier.o ksysfs.o cred.o reboot.o \
async.o range.o groups.o smpboot.o
async.o range.o smpboot.o
obj-$(CONFIG_MULTIUSER) += groups.o
ifdef CONFIG_FUNCTION_TRACER
# Do not trace debug files and internal ftrace files

@ -35,6 +35,7 @@ static int __init file_caps_disable(char *str)
}
__setup("no_file_caps", file_caps_disable);
#ifdef CONFIG_MULTIUSER
/*
* More recent versions of libcap are available from:
*
@ -386,6 +387,24 @@ bool ns_capable(struct user_namespace *ns, int cap)
}
EXPORT_SYMBOL(ns_capable);
/**
* capable - Determine if the current task has a superior capability in effect
* @cap: The capability to be tested for
*
* Return true if the current task has the given superior capability currently
* available for use, false if not.
*
* This sets PF_SUPERPRIV on the task if the capability is available on the
* assumption that it's about to be used.
*/
bool capable(int cap)
{
return ns_capable(&init_user_ns, cap);
}
EXPORT_SYMBOL(capable);
#endif /* CONFIG_MULTIUSER */
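
Callers are unchanged by this move. Per the Kconfig help added in this series, a CONFIG_MULTIUSER=n kernel runs everything with full capabilities, so a check like the hypothetical one below simply always passes there.

#include <linux/capability.h>

/* Hypothetical privileged operation guarded the usual way. */
static int example_set_policy(void)
{
	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;
	/* ... privileged work ... */
	return 0;
}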
/**
* file_ns_capable - Determine if the file's opener had a capability in effect
* @file: The file we want to check
@ -411,22 +430,6 @@ bool file_ns_capable(const struct file *file, struct user_namespace *ns,
}
EXPORT_SYMBOL(file_ns_capable);
/**
* capable - Determine if the current task has a superior capability in effect
* @cap: The capability to be tested for
*
* Return true if the current task has the given superior capability currently
* available for use, false if not.
*
* This sets PF_SUPERPRIV on the task if the capability is available on the
* assumption that it's about to be used.
*/
bool capable(int cap)
{
return ns_capable(&init_user_ns, cap);
}
EXPORT_SYMBOL(capable);
/**
* capable_wrt_inode_uidgid - Check nsown_capable and uid and gid mapped
* @inode: The inode in question

@ -4196,7 +4196,9 @@ static void *cgroup_pidlist_next(struct seq_file *s, void *v, loff_t *pos)
static int cgroup_pidlist_show(struct seq_file *s, void *v)
{
return seq_printf(s, "%d\n", *(int *)v);
seq_printf(s, "%d\n", *(int *)v);
return 0;
}
static u64 cgroup_read_notify_on_release(struct cgroup_subsys_state *css,
@ -5451,7 +5453,7 @@ struct cgroup_subsys_state *css_tryget_online_from_dir(struct dentry *dentry,
struct cgroup_subsys_state *css_from_id(int id, struct cgroup_subsys *ss)
{
WARN_ON_ONCE(!rcu_read_lock_held());
return idr_find(&ss->css_idr, id);
return id > 0 ? idr_find(&ss->css_idr, id) : NULL;
}
#ifdef CONFIG_CGROUP_DEBUG

@ -29,6 +29,9 @@
static struct kmem_cache *cred_jar;
/* init to 2 - one for init_task, one to ensure it is never freed */
struct group_info init_groups = { .usage = ATOMIC_INIT(2) };
/*
* The initial credentials for the initial task
*/

@ -9,9 +9,6 @@
#include <linux/user_namespace.h>
#include <asm/uaccess.h>
/* init to 2 - one for init_task, one to ensure it is never freed */
struct group_info init_groups = { .usage = ATOMIC_INIT(2) };
struct group_info *groups_alloc(int gidsetsize)
{
struct group_info *group_info;

@ -169,7 +169,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
return;
rcu_read_lock();
do_each_thread(g, t) {
for_each_process_thread(g, t) {
if (!max_count--)
goto unlock;
if (!--batch_count) {
@ -180,7 +180,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
/* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
if (t->state == TASK_UNINTERRUPTIBLE)
check_hung_task(t, timeout);
} while_each_thread(g, t);
}
unlock:
rcu_read_unlock();
}
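
for_each_process_thread() replaces the old do_each_thread()/while_each_thread() pair; the walk must still happen under rcu_read_lock(). A self-contained sketch (the statistic gathered is arbitrary):

#include <linux/sched.h>
#include <linux/rcupdate.h>
#include <linux/printk.h>

static void count_uninterruptible_tasks(void)
{
	struct task_struct *g, *t;
	unsigned int nr = 0;

	rcu_read_lock();
	for_each_process_thread(g, t) {
		if (t->state == TASK_UNINTERRUPTIBLE)
			nr++;
	}
	rcu_read_unlock();

	pr_info("%u tasks currently uninterruptible\n", nr);
}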

@ -387,8 +387,9 @@ void ctrl_alt_del(void)
}
char poweroff_cmd[POWEROFF_CMD_PATH_LEN] = "/sbin/poweroff";
static const char reboot_cmd[] = "/sbin/reboot";
static int __orderly_poweroff(bool force)
static int run_cmd(const char *cmd)
{
char **argv;
static char *envp[] = {
@ -397,8 +398,7 @@ static int __orderly_poweroff(bool force)
NULL
};
int ret;
argv = argv_split(GFP_KERNEL, poweroff_cmd, NULL);
argv = argv_split(GFP_KERNEL, cmd, NULL);
if (argv) {
ret = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
argv_free(argv);
@ -406,8 +406,33 @@ static int __orderly_poweroff(bool force)
ret = -ENOMEM;
}
return ret;
}
static int __orderly_reboot(void)
{
int ret;
ret = run_cmd(reboot_cmd);
if (ret) {
pr_warn("Failed to start orderly reboot: forcing the issue\n");
emergency_sync();
kernel_restart(NULL);
}
return ret;
}
static int __orderly_poweroff(bool force)
{
int ret;
ret = run_cmd(poweroff_cmd);
if (ret && force) {
pr_warn("Failed to start orderly shutdown: forcing the issue\n");
/*
* I guess this should try to kick off some daemon to sync and
* poweroff asap. Or not even bother syncing if we're doing an
@ -436,15 +461,33 @@ static DECLARE_WORK(poweroff_work, poweroff_work_func);
* This may be called from any context to trigger a system shutdown.
* If the orderly shutdown fails, it will force an immediate shutdown.
*/
int orderly_poweroff(bool force)
void orderly_poweroff(bool force)
{
if (force) /* do not override the pending "true" */
poweroff_force = true;
schedule_work(&poweroff_work);
return 0;
}
EXPORT_SYMBOL_GPL(orderly_poweroff);
static void reboot_work_func(struct work_struct *work)
{
__orderly_reboot();
}
static DECLARE_WORK(reboot_work, reboot_work_func);
/**
* orderly_reboot - Trigger an orderly system reboot
*
* This may be called from any context to trigger a system reboot.
* If the orderly reboot fails, it will force an immediate reboot.
*/
void orderly_reboot(void)
{
schedule_work(&reboot_work);
}
EXPORT_SYMBOL_GPL(orderly_reboot);
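
A usage sketch under assumptions (the health-monitor hook below is hypothetical): both entry points only queue work and may be called from any context; orderly_reboot() falls back to an immediate kernel_restart() if /sbin/reboot cannot be started, and orderly_poweroff(true) forces the shutdown if the usermode helper fails.

#include <linux/reboot.h>

/* Hypothetical health-monitor hook. */
static void health_monitor_recover(bool unrecoverable)
{
	if (unrecoverable)
		orderly_poweroff(true);	/* force poweroff if userspace cannot */
	else
		orderly_reboot();	/* runs /sbin/reboot via the usermode helper */
}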
static int __init reboot_setup(char *str)
{
for (;;) {

@ -1034,8 +1034,6 @@ resource_size_t resource_alignment(struct resource *res)
*
* request_region creates a new busy region.
*
* check_region returns non-zero if the area is already busy.
*
* release_region releases a matching busy region.
*/
@ -1097,36 +1095,6 @@ struct resource * __request_region(struct resource *parent,
}
EXPORT_SYMBOL(__request_region);
/**
* __check_region - check if a resource region is busy or free
* @parent: parent resource descriptor
* @start: resource start address
* @n: resource region size
*
* Returns 0 if the region is free at the moment it is checked,
* returns %-EBUSY if the region is busy.
*
* NOTE:
* This function is deprecated because its use is racy.
* Even if it returns 0, a subsequent call to request_region()
* may fail because another driver etc. just allocated the region.
* Do NOT use it. It will be removed from the kernel.
*/
int __check_region(struct resource *parent, resource_size_t start,
resource_size_t n)
{
struct resource * res;
res = __request_region(parent, start, n, "check-region", 0);
if (!res)
return -EBUSY;
release_resource(res);
free_resource(res);
return 0;
}
EXPORT_SYMBOL(__check_region);
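
With the racy check-then-request interface gone, the non-racy idiom is to request the region outright and handle failure, e.g. (port numbers invented for illustration):

#include <linux/ioport.h>

#define DEMO_IO_BASE	0x300	/* illustrative, not a real device */
#define DEMO_IO_NR	8

static int demo_claim_ports(void)
{
	/* request_region() checks and reserves in one atomic step. */
	if (!request_region(DEMO_IO_BASE, DEMO_IO_NR, "demo"))
		return -EBUSY;

	/* ... program the hardware ... */

	release_region(DEMO_IO_BASE, DEMO_IO_NR);
	return 0;
}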
/**
* __release_region - release a previously reserved resource region
* @parent: parent resource descriptor

@ -325,6 +325,7 @@ SYSCALL_DEFINE2(getpriority, int, which, int, who)
* SMP: There are not races, the GIDs are checked only by filesystem
* operations (as far as semantic preservation is concerned).
*/
#ifdef CONFIG_MULTIUSER
SYSCALL_DEFINE2(setregid, gid_t, rgid, gid_t, egid)
{
struct user_namespace *ns = current_user_ns();
@ -815,6 +816,7 @@ SYSCALL_DEFINE1(setfsgid, gid_t, gid)
commit_creds(new);
return old_fsgid;
}
#endif /* CONFIG_MULTIUSER */
/**
* sys_getpid - return the thread group id of the current process

@ -159,6 +159,20 @@ cond_syscall(sys_uselib);
cond_syscall(sys_fadvise64);
cond_syscall(sys_fadvise64_64);
cond_syscall(sys_madvise);
cond_syscall(sys_setuid);
cond_syscall(sys_setregid);
cond_syscall(sys_setgid);
cond_syscall(sys_setreuid);
cond_syscall(sys_setresuid);
cond_syscall(sys_getresuid);
cond_syscall(sys_setresgid);
cond_syscall(sys_getresgid);
cond_syscall(sys_setgroups);
cond_syscall(sys_getgroups);
cond_syscall(sys_setfsuid);
cond_syscall(sys_setfsgid);
cond_syscall(sys_capget);
cond_syscall(sys_capset);
/* arch-specific weak syscall entries */
cond_syscall(sys_pciconfig_read);

@ -1335,6 +1335,15 @@ static struct ctl_table vm_table[] = {
.extra1 = &min_extfrag_threshold,
.extra2 = &max_extfrag_threshold,
},
{
.procname = "compact_unevictable_allowed",
.data = &sysctl_compact_unevictable_allowed,
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec,
.extra1 = &zero,
.extra2 = &one,
},
#endif /* CONFIG_COMPACTION */
{

@ -327,11 +327,11 @@ static void t_stop(struct seq_file *m, void *p)
local_irq_enable();
}
static int trace_lookup_stack(struct seq_file *m, long i)
static void trace_lookup_stack(struct seq_file *m, long i)
{
unsigned long addr = stack_dump_trace[i];
return seq_printf(m, "%pS\n", (void *)addr);
seq_printf(m, "%pS\n", (void *)addr);
}
static void print_disabled(struct seq_file *m)

@ -247,10 +247,11 @@ size_t lc_seq_printf_stats(struct seq_file *seq, struct lru_cache *lc)
* progress) and "changed", when this in fact lead to an successful
* update of the cache.
*/
return seq_printf(seq, "\t%s: used:%u/%u "
"hits:%lu misses:%lu starving:%lu locked:%lu changed:%lu\n",
lc->name, lc->used, lc->nr_elements,
lc->hits, lc->misses, lc->starving, lc->locked, lc->changed);
seq_printf(seq, "\t%s: used:%u/%u hits:%lu misses:%lu starving:%lu locked:%lu changed:%lu\n",
lc->name, lc->used, lc->nr_elements,
lc->hits, lc->misses, lc->starving, lc->locked, lc->changed);
return 0;
}
static struct hlist_head *lc_hash_slot(struct lru_cache *lc, unsigned int enr)

@ -239,29 +239,21 @@ int string_unescape(char *src, char *dst, size_t size, unsigned int flags)
}
EXPORT_SYMBOL(string_unescape);
static int escape_passthrough(unsigned char c, char **dst, size_t *osz)
static bool escape_passthrough(unsigned char c, char **dst, char *end)
{
char *out = *dst;
if (*osz < 1)
return -ENOMEM;
*out++ = c;
*dst = out;
*osz -= 1;
return 1;
if (out < end)
*out = c;
*dst = out + 1;
return true;
}
static int escape_space(unsigned char c, char **dst, size_t *osz)
static bool escape_space(unsigned char c, char **dst, char *end)
{
char *out = *dst;
unsigned char to;
if (*osz < 2)
return -ENOMEM;
switch (c) {
case '\n':
to = 'n';
@ -279,26 +271,25 @@ static int escape_space(unsigned char c, char **dst, size_t *osz)
to = 'f';
break;
default:
return 0;
return false;
}
*out++ = '\\';
*out++ = to;
if (out < end)
*out = '\\';
++out;
if (out < end)
*out = to;
++out;
*dst = out;
*osz -= 2;
return 1;
return true;
}
static int escape_special(unsigned char c, char **dst, size_t *osz)
static bool escape_special(unsigned char c, char **dst, char *end)
{
char *out = *dst;
unsigned char to;
if (*osz < 2)
return -ENOMEM;
switch (c) {
case '\\':
to = '\\';
@ -310,71 +301,78 @@ static int escape_special(unsigned char c, char **dst, size_t *osz)
to = 'e';
break;
default:
return 0;
return false;
}
*out++ = '\\';
*out++ = to;
if (out < end)
*out = '\\';
++out;
if (out < end)
*out = to;
++out;
*dst = out;
*osz -= 2;
return 1;
return true;
}
static int escape_null(unsigned char c, char **dst, size_t *osz)
static bool escape_null(unsigned char c, char **dst, char *end)
{
char *out = *dst;
if (*osz < 2)
return -ENOMEM;
if (c)
return 0;
return false;
*out++ = '\\';
*out++ = '0';
if (out < end)
*out = '\\';
++out;
if (out < end)
*out = '0';
++out;
*dst = out;
*osz -= 2;
return 1;
return true;
}
static int escape_octal(unsigned char c, char **dst, size_t *osz)
static bool escape_octal(unsigned char c, char **dst, char *end)
{
char *out = *dst;
if (*osz < 4)
return -ENOMEM;
*out++ = '\\';
*out++ = ((c >> 6) & 0x07) + '0';
*out++ = ((c >> 3) & 0x07) + '0';
*out++ = ((c >> 0) & 0x07) + '0';
if (out < end)
*out = '\\';
++out;
if (out < end)
*out = ((c >> 6) & 0x07) + '0';
++out;
if (out < end)
*out = ((c >> 3) & 0x07) + '0';
++out;
if (out < end)
*out = ((c >> 0) & 0x07) + '0';
++out;
*dst = out;
*osz -= 4;
return 1;
return true;
}
static int escape_hex(unsigned char c, char **dst, size_t *osz)
static bool escape_hex(unsigned char c, char **dst, char *end)
{
char *out = *dst;
if (*osz < 4)
return -ENOMEM;
*out++ = '\\';
*out++ = 'x';
*out++ = hex_asc_hi(c);
*out++ = hex_asc_lo(c);
if (out < end)
*out = '\\';
++out;
if (out < end)
*out = 'x';
++out;
if (out < end)
*out = hex_asc_hi(c);
++out;
if (out < end)
*out = hex_asc_lo(c);
++out;
*dst = out;
*osz -= 4;
return 1;
return true;
}
/**
@ -426,19 +424,17 @@ static int escape_hex(unsigned char c, char **dst, size_t *osz)
* it if needs.
*
* Return:
* The amount of the characters processed to the destination buffer, or
* %-ENOMEM if the size of buffer is not enough to put an escaped character is
* returned.
*
* Even in the case of error @dst pointer will be updated to point to the byte
* after the last processed character.
* The total size of the escaped output that would be generated for
* the given input and flags. To check whether the output was
* truncated, compare the return value to osz. There is room left in
* dst for a '\0' terminator if and only if ret < osz.
*/
int string_escape_mem(const char *src, size_t isz, char **dst, size_t osz,
int string_escape_mem(const char *src, size_t isz, char *dst, size_t osz,
unsigned int flags, const char *esc)
{
char *out = *dst, *p = out;
char *p = dst;
char *end = p + osz;
bool is_dict = esc && *esc;
int ret = 0;
while (isz--) {
unsigned char c = *src++;
@ -458,55 +454,26 @@ int string_escape_mem(const char *src, size_t isz, char **dst, size_t osz,
(is_dict && !strchr(esc, c))) {
/* do nothing */
} else {
if (flags & ESCAPE_SPACE) {
ret = escape_space(c, &p, &osz);
if (ret < 0)
break;
if (ret > 0)
continue;
}
if (flags & ESCAPE_SPACE && escape_space(c, &p, end))
continue;
if (flags & ESCAPE_SPECIAL) {
ret = escape_special(c, &p, &osz);
if (ret < 0)
break;
if (ret > 0)
continue;
}
if (flags & ESCAPE_SPECIAL && escape_special(c, &p, end))
continue;
if (flags & ESCAPE_NULL) {
ret = escape_null(c, &p, &osz);
if (ret < 0)
break;
if (ret > 0)
continue;
}
if (flags & ESCAPE_NULL && escape_null(c, &p, end))
continue;
/* ESCAPE_OCTAL and ESCAPE_HEX always go last */
if (flags & ESCAPE_OCTAL) {
ret = escape_octal(c, &p, &osz);
if (ret < 0)
break;
if (flags & ESCAPE_OCTAL && escape_octal(c, &p, end))
continue;
}
if (flags & ESCAPE_HEX) {
ret = escape_hex(c, &p, &osz);
if (ret < 0)
break;
if (flags & ESCAPE_HEX && escape_hex(c, &p, end))
continue;
}
}
ret = escape_passthrough(c, &p, &osz);
if (ret < 0)
break;
escape_passthrough(c, &p, end);
}
*dst = p;
if (ret < 0)
return ret;
return p - out;
return p - dst;
}
EXPORT_SYMBOL(string_escape_mem);
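
Under the new convention the caller passes the output buffer directly and gets back the full would-be length, snprintf()-style. A hedged sketch of checking for truncation (buffer size and flags are arbitrary):

#include <linux/string_helpers.h>
#include <linux/printk.h>

static void dump_escaped(const char *src, size_t len)
{
	char buf[64];
	int ret;

	ret = string_escape_mem(src, len, buf, sizeof(buf),
				ESCAPE_ANY_NP, NULL);
	if (ret >= (int)sizeof(buf))
		pr_warn("escaped output truncated, needed %d bytes\n", ret);
	else
		pr_info("escaped: %.*s\n", ret, buf);
}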

@ -18,26 +18,26 @@ static const unsigned char data_b[] = {
static const unsigned char data_a[] = ".2.{....p..$}.4...1.....L...C...";
static const char *test_data_1_le[] __initconst = {
static const char * const test_data_1_le[] __initconst = {
"be", "32", "db", "7b", "0a", "18", "93", "b2",
"70", "ba", "c4", "24", "7d", "83", "34", "9b",
"a6", "9c", "31", "ad", "9c", "0f", "ac", "e9",
"4c", "d1", "19", "99", "43", "b1", "af", "0c",
};
static const char *test_data_2_le[] __initconst = {
static const char *test_data_2_le[] __initdata = {
"32be", "7bdb", "180a", "b293",
"ba70", "24c4", "837d", "9b34",
"9ca6", "ad31", "0f9c", "e9ac",
"d14c", "9919", "b143", "0caf",
};
static const char *test_data_4_le[] __initconst = {
static const char *test_data_4_le[] __initdata = {
"7bdb32be", "b293180a", "24c4ba70", "9b34837d",
"ad319ca6", "e9ac0f9c", "9919d14c", "0cafb143",
};
static const char *test_data_8_le[] __initconst = {
static const char *test_data_8_le[] __initdata = {
"b293180a7bdb32be", "9b34837d24c4ba70",
"e9ac0f9cad319ca6", "0cafb1439919d14c",
};

@ -260,16 +260,28 @@ static __init const char *test_string_find_match(const struct test_string_2 *s2,
return NULL;
}
static __init void
test_string_escape_overflow(const char *in, int p, unsigned int flags, const char *esc,
int q_test, const char *name)
{
int q_real;
q_real = string_escape_mem(in, p, NULL, 0, flags, esc);
if (q_real != q_test)
pr_warn("Test '%s' failed: flags = %u, osz = 0, expected %d, got %d\n",
name, flags, q_test, q_real);
}
static __init void test_string_escape(const char *name,
const struct test_string_2 *s2,
unsigned int flags, const char *esc)
{
int q_real = 512;
char *out_test = kmalloc(q_real, GFP_KERNEL);
char *out_real = kmalloc(q_real, GFP_KERNEL);
size_t out_size = 512;
char *out_test = kmalloc(out_size, GFP_KERNEL);
char *out_real = kmalloc(out_size, GFP_KERNEL);
char *in = kmalloc(256, GFP_KERNEL);
char *buf = out_real;
int p = 0, q_test = 0;
int q_real;
if (!out_test || !out_real || !in)
goto out;
@ -301,29 +313,19 @@ static __init void test_string_escape(const char *name,
q_test += len;
}
q_real = string_escape_mem(in, p, &buf, q_real, flags, esc);
q_real = string_escape_mem(in, p, out_real, out_size, flags, esc);
test_string_check_buf(name, flags, in, p, out_real, q_real, out_test,
q_test);
test_string_escape_overflow(in, p, flags, esc, q_test, name);
out:
kfree(in);
kfree(out_real);
kfree(out_test);
}
static __init void test_string_escape_nomem(void)
{
char *in = "\eb \\C\007\"\x90\r]";
char out[64], *buf = out;
int rc = -ENOMEM, ret;
ret = string_escape_str_any_np(in, &buf, strlen(in), NULL);
if (ret == rc)
return;
pr_err("Test 'escape nomem' failed: got %d instead of %d\n", ret, rc);
}
static int __init test_string_helpers_init(void)
{
unsigned int i;
@ -342,8 +344,6 @@ static int __init test_string_helpers_init(void)
for (i = 0; i < (ESCAPE_ANY_NP | ESCAPE_HEX) + 1; i++)
test_string_escape("escape 1", escape1, i, TEST_STRING_2_DICT_1);
test_string_escape_nomem();
return -EINVAL;
}
module_init(test_string_helpers_init);

@ -17,6 +17,7 @@
*/
#include <stdarg.h>
#include <linux/clk-provider.h>
#include <linux/module.h> /* for KSYM_SYMBOL_LEN */
#include <linux/types.h>
#include <linux/string.h>
@ -340,11 +341,11 @@ int num_to_str(char *buf, int size, unsigned long long num)
return len;
}
#define ZEROPAD 1 /* pad with zero */
#define SIGN 2 /* unsigned/signed long */
#define SIGN 1 /* unsigned/signed, must be 1 */
#define LEFT 2 /* left justified */
#define PLUS 4 /* show plus */
#define SPACE 8 /* space if plus */
#define LEFT 16 /* left justified */
#define ZEROPAD 16 /* pad with zero, must be 16 == '0' - ' ' */
#define SMALL 32 /* use lowercase in hex (must be 32 == 0x20) */
#define SPECIAL 64 /* prefix hex with "0x", octal with "0" */
@ -383,10 +384,7 @@ static noinline_for_stack
char *number(char *buf, char *end, unsigned long long num,
struct printf_spec spec)
{
/* we are called with base 8, 10 or 16, only, thus don't need "G..." */
static const char digits[16] = "0123456789ABCDEF"; /* "GHIJKLMNOPQRSTUVWXYZ"; */
char tmp[66];
char tmp[3 * sizeof(num)];
char sign;
char locase;
int need_pfx = ((spec.flags & SPECIAL) && spec.base != 10);
@ -422,12 +420,7 @@ char *number(char *buf, char *end, unsigned long long num,
/* generate full string in tmp[], in reverse order */
i = 0;
if (num < spec.base)
tmp[i++] = digits[num] | locase;
/* Generic code, for any base:
else do {
tmp[i++] = (digits[do_div(num,base)] | locase);
} while (num != 0);
*/
tmp[i++] = hex_asc_upper[num] | locase;
else if (spec.base != 10) { /* 8 or 16 */
int mask = spec.base - 1;
int shift = 3;
@ -435,7 +428,7 @@ char *number(char *buf, char *end, unsigned long long num,
if (spec.base == 16)
shift = 4;
do {
tmp[i++] = (digits[((unsigned char)num) & mask] | locase);
tmp[i++] = (hex_asc_upper[((unsigned char)num) & mask] | locase);
num >>= shift;
} while (num);
} else { /* base 10 */
@ -447,7 +440,7 @@ char *number(char *buf, char *end, unsigned long long num,
spec.precision = i;
/* leading space padding */
spec.field_width -= spec.precision;
if (!(spec.flags & (ZEROPAD+LEFT))) {
if (!(spec.flags & (ZEROPAD | LEFT))) {
while (--spec.field_width >= 0) {
if (buf < end)
*buf = ' ';
@ -475,7 +468,8 @@ char *number(char *buf, char *end, unsigned long long num,
}
/* zero or space padding */
if (!(spec.flags & LEFT)) {
char c = (spec.flags & ZEROPAD) ? '0' : ' ';
char c = ' ' + (spec.flags & ZEROPAD);
BUILD_BUG_ON(' ' + ZEROPAD != '0');
while (--spec.field_width >= 0) {
if (buf < end)
*buf = c;
@ -783,11 +777,19 @@ char *hex_string(char *buf, char *end, u8 *addr, struct printf_spec spec,
if (spec.field_width > 0)
len = min_t(int, spec.field_width, 64);
for (i = 0; i < len && buf < end - 1; i++) {
buf = hex_byte_pack(buf, addr[i]);
for (i = 0; i < len; ++i) {
if (buf < end)
*buf = hex_asc_hi(addr[i]);
++buf;
if (buf < end)
*buf = hex_asc_lo(addr[i]);
++buf;
if (buf < end && separator && i != len - 1)
*buf++ = separator;
if (separator && i != len - 1) {
if (buf < end)
*buf = separator;
++buf;
}
}
return buf;
@ -1233,8 +1235,12 @@ char *escaped_string(char *buf, char *end, u8 *addr, struct printf_spec spec,
len = spec.field_width < 0 ? 1 : spec.field_width;
/* Ignore the error. We print as many characters as we can */
string_escape_mem(addr, len, &buf, end - buf, flags, NULL);
/*
* string_escape_mem() writes as many characters as it can to
* the given buffer, and returns the total size of the output
* had the buffer been big enough.
*/
buf += string_escape_mem(addr, len, buf, buf < end ? end - buf : 0, flags, NULL);
return buf;
}
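
The consumer side of %*pE, as a minimal sketch (the payload is arbitrary); the field width selects how many bytes of the buffer to escape.

#include <linux/printk.h>
#include <linux/types.h>

static void log_payload(const u8 *data, int len)
{
	pr_info("payload: %*pE\n", len, data);
}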
@ -1322,6 +1328,30 @@ char *address_val(char *buf, char *end, const void *addr,
return number(buf, end, num, spec);
}
static noinline_for_stack
char *clock(char *buf, char *end, struct clk *clk, struct printf_spec spec,
const char *fmt)
{
if (!IS_ENABLED(CONFIG_HAVE_CLK) || !clk)
return string(buf, end, NULL, spec);
switch (fmt[1]) {
case 'r':
return number(buf, end, clk_get_rate(clk), spec);
case 'n':
default:
#ifdef CONFIG_COMMON_CLK
return string(buf, end, __clk_get_name(clk), spec);
#else
spec.base = 16;
spec.field_width = sizeof(unsigned long) * 2 + 2;
spec.flags |= SPECIAL | SMALL | ZEROPAD;
return number(buf, end, (unsigned long)clk, spec);
#endif
}
}
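
Usage sketch for the new specifiers (the clock and message are hypothetical): %pC and %pCn print the clock's name under the common clock framework (or its address on legacy platforms), and %pCr prints its current rate.

#include <linux/clk.h>
#include <linux/printk.h>

static void report_clock(struct clk *clk)
{
	pr_info("timer clock %pCn running at %pCr Hz\n", clk, clk);
}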
int kptr_restrict __read_mostly;
/*
@ -1404,6 +1434,11 @@ int kptr_restrict __read_mostly;
* (default assumed to be phys_addr_t, passed by reference)
* - 'd[234]' For a dentry name (optionally 2-4 last components)
* - 'D[234]' Same as 'd' but for a struct file
* - 'C' For a clock, it prints the name (Common Clock Framework) or address
* (legacy clock framework) of the clock
* - 'Cn' For a clock, it prints the name (Common Clock Framework) or address
* (legacy clock framework) of the clock
* - 'Cr' For a clock, it prints the current rate of the clock
*
* Note: The difference between 'S' and 'F' is that on ia64 and ppc64
* function pointers are really function descriptors, which contain a
@ -1548,6 +1583,8 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
return address_val(buf, end, ptr, spec, fmt);
case 'd':
return dentry_name(buf, end, ptr, spec, fmt);
case 'C':
return clock(buf, end, ptr, spec, fmt);
case 'D':
return dentry_name(buf, end,
((const struct file *)ptr)->f_path.dentry,
@ -1738,29 +1775,21 @@ int format_decode(const char *fmt, struct printf_spec *spec)
if (spec->qualifier == 'L')
spec->type = FORMAT_TYPE_LONG_LONG;
else if (spec->qualifier == 'l') {
if (spec->flags & SIGN)
spec->type = FORMAT_TYPE_LONG;
else
spec->type = FORMAT_TYPE_ULONG;
BUILD_BUG_ON(FORMAT_TYPE_ULONG + SIGN != FORMAT_TYPE_LONG);
spec->type = FORMAT_TYPE_ULONG + (spec->flags & SIGN);
} else if (_tolower(spec->qualifier) == 'z') {
spec->type = FORMAT_TYPE_SIZE_T;
} else if (spec->qualifier == 't') {
spec->type = FORMAT_TYPE_PTRDIFF;
} else if (spec->qualifier == 'H') {
if (spec->flags & SIGN)
spec->type = FORMAT_TYPE_BYTE;
else
spec->type = FORMAT_TYPE_UBYTE;
BUILD_BUG_ON(FORMAT_TYPE_UBYTE + SIGN != FORMAT_TYPE_BYTE);
spec->type = FORMAT_TYPE_UBYTE + (spec->flags & SIGN);
} else if (spec->qualifier == 'h') {
if (spec->flags & SIGN)
spec->type = FORMAT_TYPE_SHORT;
else
spec->type = FORMAT_TYPE_USHORT;
BUILD_BUG_ON(FORMAT_TYPE_USHORT + SIGN != FORMAT_TYPE_SHORT);
spec->type = FORMAT_TYPE_USHORT + (spec->flags & SIGN);
} else {
if (spec->flags & SIGN)
spec->type = FORMAT_TYPE_INT;
else
spec->type = FORMAT_TYPE_UINT;
BUILD_BUG_ON(FORMAT_TYPE_UINT + SIGN != FORMAT_TYPE_INT);
spec->type = FORMAT_TYPE_UINT + (spec->flags & SIGN);
}
return ++fmt - start;
@ -1800,6 +1829,11 @@ int format_decode(const char *fmt, struct printf_spec *spec)
* %*pE[achnops] print an escaped buffer
* %*ph[CDN] a variable-length hex string with a separator (supports up to 64
* bytes of the input)
* %pC output the name (Common Clock Framework) or address (legacy clock
* framework) of a clock
* %pCn output the name (Common Clock Framework) or address (legacy clock
* framework) of a clock
* %pCr output the current rate of a clock
* %n is ignored
*
* ** Please update Documentation/printk-formats.txt when making changes **

Some files were not shown because too many files have changed in this diff.