Commit graph

373179 commits

Author SHA1 Message Date
Jamal Hadi Salim
0dcffd0964 net_sched: act_ipt forward compat with xtables
Deal with changes in newer xtables while maintaining backward
compatibility. Thanks to Jan Engelhardt for suggestions.

Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-05-01 13:19:19 -04:00
Linus Torvalds
a49fe6d59a Merge branch 'topic/omap3isp' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull omap3isp clk support from Mauro Carvalho Chehab:
 "This patch were sent in separate as it depends on a merge from clock
  framework, that you merged in commit 362ed48dee50"

* 'topic/omap3isp' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
  [media] omap3isp: Use the common clock framework
2013-05-01 09:57:04 -07:00
Dave Kleikamp
73aaa22d5f jfs: fix a couple races
This patch fixes races uncovered by xfstests testcase 068.

One race is the result of jfs_sync() trying to write a sync point to the
journal after it has been frozen (or while it is being frozen). Since
freezing syncs the journal, there is no need to write a sync point, so we
simply want to return.

The second involves jfs_write_inode() being called on a deleted inode.
It calls jfs_flush_journal which is held up by the jfs_commit thread
doing the final iput on the same deleted inode, which itself is
waiting for the I_SYNC flag to be cleared. jfs_write_inode need not
do anything when i_nlink is zero, which is the easy fix.

Reported-by: Michael L. Semon <mlsemon35@gmail.com>
Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
2013-05-01 11:16:59 -05:00
Dmitry Torokhov
bf61c8840e Merge branch 'next' into for-linus
Prepare first set of updates for 3.10 merge window.
2013-05-01 08:47:44 -07:00
Linus Torvalds
823e75f723 Merge branch 'ipc-scalability'
Merge IPC cleanup and scalability patches from Andrew Morton.

This cleans up many of the oddities in the IPC code, uses the list
iterator helpers, splits out locking and adds per-semaphore locks for
greater scalability of the IPC semaphore code.

Most normal user-level locking by now uses futexes (ie pthreads, but
also a lot of specialized locks), but SysV IPC semaphores are apparently
still used in some big applications, either for portability reasons, or
because they offer tracking and undo (and you don't need to have a
special shared memory area for them).

Our IPC semaphore scalability was pitiful.  We used to lock much too big
ranges, and we used to have a single ipc lock per ipc semaphore array.
Most loads never cared, but some do.  There are some numbers in the
individual commits.

* ipc-scalability:
  ipc: sysv shared memory limited to 8TiB
  ipc/msg.c: use list_for_each_entry_[safe] for list traversing
  ipc,sem: fine grained locking for semtimedop
  ipc,sem: have only one list in struct sem_queue
  ipc,sem: open code and rename sem_lock
  ipc,sem: do not hold ipc lock more than necessary
  ipc: introduce lockless pre_down ipcctl
  ipc: introduce obtaining a lockless ipc object
  ipc: remove bogus lock comment for ipc_checkid
  ipc/msgutil.c: use linux/uaccess.h
  ipc: refactor msg list search into separate function
  ipc: simplify msg list search
  ipc: implement MSG_COPY as a new receive mode
  ipc: remove msg handling from queue scan
  ipc: set EFAULT as default error in load_msg()
  ipc: tighten msg copy loops
  ipc: separate msg allocation from userspace copy
  ipc: clamp with min()
2013-05-01 08:17:51 -07:00
Robin Holt
d69f3bad46 ipc: sysv shared memory limited to 8TiB
While trying to run an application that put data into half of memory
using shmget(), we found that having a shmall value below 8EiB-8TiB
would prevent us from using anything more than 8TiB.  Setting
kernel.shmall greater than 8EiB-8TiB made the job work.

In the newseg() function, the page count is computed as an int, and at
8TiB it exceeds INT_MAX:

ipc/shm.c:
 458 static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
 459 {
...
 465         int numpages = (size + PAGE_SIZE -1) >> PAGE_SHIFT;
...
 474         if (ns->shm_tot + numpages > ns->shm_ctlall)
 475                 return -ENOSPC;
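
A small userspace illustration of the overflow described above (4 KiB pages
and a PAGE_SHIFT of 12 are assumed): the page count for an 8 TiB segment no
longer fits in an int.

  #include <stdio.h>
  #include <stddef.h>

  int main(void)
  {
          unsigned long long size  = 8ULL << 40;               /* 8 TiB */
          unsigned long long pages = (size + 4096 - 1) >> 12;  /* page count */

          int    as_int  = pages;   /* 2^31 does not fit; typically wraps negative */
          size_t as_size = pages;   /* what the akpm fix below uses instead */

          printf("pages as int:    %d\n", as_int);
          printf("pages as size_t: %zu\n", as_size);
          return 0;
  }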

[akpm@linux-foundation.org: make ipc/shm.c:newseg()'s numpages size_t, not int]
Signed-off-by: Robin Holt <holt@sgi.com>
Reported-by: Alex Thorlton <athorlton@sgi.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:58 -07:00
Nikola Pajkovsky
41239fe82d ipc/msg.c: use list_for_each_entry_[safe] for list traversing
The ipc/msg.c code does its list operations by hand, open-coding the
accesses instead of using list_for_each_entry_[safe].

Signed-off-by: Nikola Pajkovsky <npajkovs@redhat.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:58 -07:00
Rik van Riel
6062a8dc05 ipc,sem: fine grained locking for semtimedop
Introduce finer grained locking for semtimedop, to handle the common case
of a program wanting to manipulate one semaphore from an array with
multiple semaphores.

If the call is a semop manipulating just one semaphore in an array with
multiple semaphores, only take the lock for that semaphore itself.

If the call needs to manipulate multiple semaphores, or another caller is
in a transaction that manipulates multiple semaphores, the sem_array lock
is taken, as well as all the locks for the individual semaphores.
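
A runnable illustration of the common case described above: an array with
many semaphores where each semop() call touches a single semaphore (the
array size and semaphore index below are arbitrary).

  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/ipc.h>
  #include <sys/sem.h>

  int main(void)
  {
          int semid = semget(IPC_PRIVATE, 128, IPC_CREAT | 0600);
          if (semid < 0) {
                  perror("semget");
                  return 1;
          }

          /* Manipulate only semaphore 5 out of the 128 in the array; with
           * this series only that semaphore's lock needs to be taken. */
          struct sembuf up   = { .sem_num = 5, .sem_op =  1, .sem_flg = 0 };
          struct sembuf down = { .sem_num = 5, .sem_op = -1, .sem_flg = 0 };
          semop(semid, &up, 1);
          semop(semid, &down, 1);

          semctl(semid, 0, IPC_RMID);
          return 0;
  }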

On a 24 CPU system, performance numbers with the semop-multi
test with N threads and N semaphores, look like this:

	vanilla		Davidlohr's	Davidlohr's +	Davidlohr's +
threads			patches		rwlock patches	v3 patches
10	610652		726325		1783589		2142206
20	341570		365699		1520453		1977878
30	288102		307037		1498167		2037995
40	290714		305955		1612665		2256484
50	288620		312890		1733453		2650292
60	289987		306043		1649360		2388008
70	291298		306347		1723167		2717486
80	290948		305662		1729545		2763582
90	290996		306680		1736021		2757524
100	292243		306700		1773700		3059159

[davidlohr.bueso@hp.com: do not call sem_lock when bogus sma]
[davidlohr.bueso@hp.com: make refcounter atomic]
Signed-off-by: Rik van Riel <riel@redhat.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Chegu Vinod <chegu_vinod@hp.com>
Cc: Jason Low <jason.low2@hp.com>
Reviewed-by: Michel Lespinasse <walken@google.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Tested-by: Emmanuel Benisty <benisty.e@gmail.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:58 -07:00
Rik van Riel
9f1bc2c902 ipc,sem: have only one list in struct sem_queue
Having only one list in struct sem_queue, and only queueing simple
semaphore operations on the list for the semaphore involved, allows us to
introduce finer grained locking for semtimedop.

Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Chegu Vinod <chegu_vinod@hp.com>
Cc: Emmanuel Benisty <benisty.e@gmail.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:58 -07:00
Rik van Riel
c460b662d5 ipc,sem: open code and rename sem_lock
Rename sem_lock() to sem_obtain_lock(), so we can introduce a sem_lock()
later that only locks the sem_array and does nothing else.

Open code the locking from ipc_lock() in sem_obtain_lock() so we can
introduce finer grained locking for the sem_array in the next patch.

[akpm@linux-foundation.org: propagate the ipc_obtain_object() errno out of sem_obtain_lock()]
Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Chegu Vinod <chegu_vinod@hp.com>
Cc: Emmanuel Benisty <benisty.e@gmail.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:58 -07:00
Davidlohr Bueso
16df3674ef ipc,sem: do not hold ipc lock more than necessary
Instead of holding the ipc lock for permissions and security checks, among
others, only acquire it when necessary.

Some numbers....

1) With Rik's semop-multi.c microbenchmark we can see the following
   results:

Baseline (3.9-rc1):
cpus 4, threads: 256, semaphores: 128, test duration: 30 secs
total operations: 151452270, ops/sec 5048409

+  59.40%            a.out  [kernel.kallsyms]  [k] _raw_spin_lock
+   6.14%            a.out  [kernel.kallsyms]  [k] sys_semtimedop
+   3.84%            a.out  [kernel.kallsyms]  [k] avc_has_perm_flags
+   3.64%            a.out  [kernel.kallsyms]  [k] __audit_syscall_exit
+   2.06%            a.out  [kernel.kallsyms]  [k] copy_user_enhanced_fast_string
+   1.86%            a.out  [kernel.kallsyms]  [k] ipc_lock

With this patchset:
cpus 4, threads: 256, semaphores: 128, test duration: 30 secs
total operations: 273156400, ops/sec 9105213

+  18.54%            a.out  [kernel.kallsyms]  [k] _raw_spin_lock
+  11.72%            a.out  [kernel.kallsyms]  [k] sys_semtimedop
+   7.70%            a.out  [kernel.kallsyms]  [k] ipc_has_perm.isra.21
+   6.58%            a.out  [kernel.kallsyms]  [k] avc_has_perm_flags
+   6.54%            a.out  [kernel.kallsyms]  [k] __audit_syscall_exit
+   4.71%            a.out  [kernel.kallsyms]  [k] ipc_obtain_object_check

2) While the improvements on an Oracle swingbench DSS (data mining)
   workload are not as exciting as with Rik's benchmark, we can still
   see some positive numbers.  For an 8 socket machine the following are
   the percentages of %sys time incurred in the ipc lock:

Baseline (3.9-rc1):
100 swingbench users: 8.74%
400 swingbench users: 21.86%
800 swingbench users: 84.35%

With this patchset:
100 swingbench users: 8.11%
400 swingbench users: 19.93%
800 swingbench users: 77.69%

[riel@redhat.com: fix two locking bugs]
[sasha.levin@oracle.com: prevent releasing RCU read lock twice in semctl_main]
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Acked-by: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Emmanuel Benisty <benisty.e@gmail.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:58 -07:00
Davidlohr Bueso
444d0f621b ipc: introduce lockless pre_down ipcctl
Various forms of ipc use ipcctl_pre_down() to retrieve an ipc object and
check permissions, mostly for IPC_RMID and IPC_SET commands.

Introduce ipcctl_pre_down_nolock(), a lockless version of this function.
The locking version is retained, yet modified to call the nolock version
without affecting its semantics, thus transparent to all ipc callers.
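
A minimal sketch of the wrapper pattern this describes; the parameter list
mirrors the existing ipcctl_pre_down() and should be read as an assumption,
not the exact kernel signature.

  struct kern_ipc_perm *ipcctl_pre_down(struct ipc_namespace *ns,
                                        struct ipc_ids *ids, int id, int cmd,
                                        struct ipc64_perm *perm, int extra_perm)
  {
          struct kern_ipc_perm *ipcp;

          ipcp = ipcctl_pre_down_nolock(ns, ids, id, cmd, perm, extra_perm);
          if (IS_ERR(ipcp))
                  return ipcp;            /* callers see unchanged error semantics */

          spin_lock(&ipcp->lock);         /* the locking variant only adds the lock */
          return ipcp;
  }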

Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Chegu Vinod <chegu_vinod@hp.com>
Cc: Emmanuel Benisty <benisty.e@gmail.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:58 -07:00
Davidlohr Bueso
4d2bff5eb8 ipc: introduce obtaining a lockless ipc object
Through ipc_lock() and therefore ipc_lock_check() we currently return the
locked ipc object.  This is not necessary for all situations and can,
therefore, cause unnecessary ipc lock contention.

Introduce analogous ipc_obtain_object() and ipc_obtain_object_check()
functions that only lookup and return the ipc object.

Both these functions must be called within the RCU read critical section.
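
A sketch of the intended calling pattern; example_check() is a hypothetical
caller and the argument list of ipc_obtain_object_check() is an assumption,
but the lookup itself must stay inside the RCU read-side section.

  static int example_check(struct ipc_ids *ids, int id)
  {
          struct kern_ipc_perm *ipcp;

          rcu_read_lock();
          ipcp = ipc_obtain_object_check(ids, id);    /* lookup only, no spinlock */
          if (IS_ERR(ipcp)) {
                  rcu_read_unlock();
                  return PTR_ERR(ipcp);
          }
          /* ... checks that do not need the per-object lock ... */
          rcu_read_unlock();
          return 0;
  }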

[akpm@linux-foundation.org: propagate the ipc_obtain_object() errno from ipc_lock()]
Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Acked-by: Michel Lespinasse <walken@google.com>
Cc: Emmanuel Benisty <benisty.e@gmail.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
Davidlohr Bueso
7bb4deff61 ipc: remove bogus lock comment for ipc_checkid
This series makes the sysv semaphore code more scalable, by reducing the
time the semaphore lock is held, and making the locking more scalable for
semaphore arrays with multiple semaphores.

The first four patches were written by Davidlohr Bueso, and reduce the
hold time of the semaphore lock.

The last three patches change the sysv semaphore code locking to be more
fine grained, providing a performance boost when multiple semaphores in a
semaphore array are being manipulated simultaneously.

On a 24 CPU system, performance numbers with the semop-multi
test with N threads and N semaphores, look like this:

	vanilla		Davidlohr's	Davidlohr's +	Davidlohr's +
	threads			patches		rwlock patches	v3 patches
	10	610652		726325		1783589		2142206
	20	341570		365699		1520453		1977878
	30	288102		307037		1498167		2037995
	40	290714		305955		1612665		2256484
	50	288620		312890		1733453		2650292
	60	289987		306043		1649360		2388008
	70	291298		306347		1723167		2717486
	80	290948		305662		1729545		2763582
	90	290996		306680		1736021		2757524
	100	292243		306700		1773700		3059159

This patch:

There is no reason to be holding the ipc lock while reading ipcp->seq,
hence remove misleading comment.

Also simplify the return value for the function.

Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Chegu Vinod <chegu_vinod@hp.com>
Cc: Emmanuel Benisty <benisty.e@gmail.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
HoSung Jung
1e3c941c52 ipc/msgutil.c: use linux/uaccess.h
Signed-off-by: HoSung Jung <rain6557@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
Peter Hurley
daaf74cf08 ipc: refactor msg list search into separate function
[fengguang.wu@intel.com: find_msg can be static]
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
Peter Hurley
d076ac9112 ipc: simplify msg list search
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
Peter Hurley
8ac6ed5857 ipc: implement MSG_COPY as a new receive mode
Teach the helper routines about MSG_COPY so that msgtyp is preserved as
the message number to copy.

The security functions affected by this change were audited and no
additional changes are necessary.
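
For readers unfamiliar with the mode, a small userspace illustration of the
MSG_COPY semantics described above; MSG_COPY must be combined with IPC_NOWAIT
and needs a kernel built with CONFIG_CHECKPOINT_RESTORE, and the fallback
define below matches linux/msg.h.

  #include <stdio.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/ipc.h>
  #include <sys/msg.h>

  #ifndef MSG_COPY
  #define MSG_COPY 040000
  #endif

  struct mbuf { long mtype; char mtext[64]; };

  int main(void)
  {
          int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
          struct mbuf m = { .mtype = 7 };

          strcpy(m.mtext, "hello");
          msgsnd(qid, &m, sizeof(m.mtext), 0);

          /* With MSG_COPY, msgtyp (here 0) is the message *number* to copy,
           * not a type filter, and the message is left on the queue. */
          struct mbuf peek;
          if (msgrcv(qid, &peek, sizeof(peek.mtext), 0, MSG_COPY | IPC_NOWAIT) >= 0)
                  printf("copied: type=%ld text=%s\n", peek.mtype, peek.mtext);
          else
                  perror("msgrcv(MSG_COPY)");   /* ENOSYS without CHECKPOINT_RESTORE */

          msgctl(qid, IPC_RMID, NULL);
          return 0;
  }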

Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
Peter Hurley
852028af86 ipc: remove msg handling from queue scan
In preparation for refactoring the queue scan into a separate
function, relocate msg copying.

Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
Peter Hurley
2b3097a294 ipc: set EFAULT as default error in load_msg()
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
Peter Hurley
da085d4591 ipc: tighten msg copy loops
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
Peter Hurley
be5f4b335f ipc: separate msg allocation from userspace copy
Separating msg allocation from the userspace copy enables single-block
vmalloc allocation instead.

Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
Peter Hurley
3d8fa456d5 ipc: clamp with min()
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 08:12:57 -07:00
Linus Torvalds
149b306089 Mostly performance and bug fixes, plus some cleanups. The one new
feature this merge window is a new ioctl EXT4_IOC_SWAP_BOOT which
 allows installation of a hidden inode designed for boot loaders.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.12 (GNU/Linux)
 
 iQIcBAABCAAGBQJRfpDwAAoJENNvdpvBGATwsjwP/17V3AN6XTEhZK80p3/qN5YD
 N2QIHeyYIqCGpczLs2TQEkxWX6nqpDggAPXY956wvgeEMQV+pQ+DLO4Ol9+p5WD2
 hrklleYhtOjFQ3Xh4lqrEi5FzKVzWagVDLqgUjALJ+D+hkDB7ZQT/fm2sH45rzot
 xBp3aVqANU8GqAAbEW4/Ng9ZGMx0dpANiU2svbjM71sv2dCLFmWAkz+GgZsMbuJZ
 vnKIZP6I6plwP3LuZzEbVCA7F2PzC4ywEOJKjIEvgHpX6uMDR3FX8pD5Dlo/o6e2
 eP+KLnD43mJMxBmTn22x5Sm0N6DUzJCEELRJWB9wCZoLdEvbEWRxT3qsPXfLWelG
 2jj4bImXF2CqYEsJww5FV2WdXXdnuM57pZym5vMZGAFyKPSCJobA4Y3XRdXkBfXf
 Gq/cFoPYv2EcBIhz3zrRj+tbY8esbO9wOnF6+x+AF10BspD2V7nuoVdWVhOf0A3v
 i9ifGPwLk3e3xHr9oXheo7IWn52oviZeyD77d7D7MLhgn+xU4LaVhW3R63Q+mI4D
 0TXG25R1CVcE7wyFy3gqSVXSCDO0JcQBL5LgcL+wAGXcHPAXqBpN2DFTPo+9fJH2
 g3YMwr+wMbci1XRVQ2vdTt/nBZYjOCh6PgRmg3KjTz11Ra5EsjQvYjKWYwqf2RGn
 QhCgbzd/qtZfNJztLvr7
 =GCT2
 -----END PGP SIGNATURE-----

Merge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4

Pull ext4 updates from Ted Ts'o:
 "Mostly performance and bug fixes, plus some cleanups.  The one new
  feature this merge window is a new ioctl EXT4_IOC_SWAP_BOOT which
  allows installation of a hidden inode designed for boot loaders."

* tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: (50 commits)
  ext4: fix type-widening bug in inode table readahead code
  ext4: add check for inodes_count overflow in new resize ioctl
  ext4: fix Kconfig documentation for CONFIG_EXT4_DEBUG
  ext4: fix online resizing for ext3-compat file systems
  jbd2: trace when lock_buffer in do_get_write_access takes a long time
  ext4: mark metadata blocks using bh flags
  buffer: add BH_Prio and BH_Meta flags
  ext4: mark all metadata I/O with REQ_META
  ext4: fix readdir error in case inline_data+^dir_index.
  ext4: fix readdir error in the case of inline_data+dir_index
  jbd2: use kmem_cache_zalloc instead of kmem_cache_alloc/memset
  ext4: mext_insert_extents should update extent block checksum
  ext4: move quota initialization out of inode allocation transaction
  ext4: reserve xattr index for Rich ACL support
  jbd2: reduce journal_head size
  ext4: clear buffer_uninit flag when submitting IO
  ext4: use io_end for multiple bios
  ext4: make ext4_bio_write_page() use BH_Async_Write flags
  ext4: Use kstrtoul() instead of parse_strtoul()
  ext4: defragmentation code cleanup
  ...
2013-05-01 08:04:12 -07:00
Linus Torvalds
b0ca4d0123 3.10 dma-buf updates
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.12 (GNU/Linux)
 
 iQIcBAABAgAGBQJRgQj+AAoJEAG+/NWsLn5b+ZgP/2To0IUb3hyvcfCEnGytzOu1
 zIkDMyqTYtPy3r+gdZ7LkZYJkJ0NrO9DwWUDcPEY/9mBL0kwOBIgWumUi2ABESQI
 /tuTYzG0MKo5UNRr/aSK5erLSZeSbRht4DE8oiK9mw1O6/mUq5c4J/V+z6AYWC8m
 4cad8+XGU6x1g0a13VIhO8p2OcfV/ZyGwOeHN4SMZhkoHt3Le8wVXYMUDDofmCKs
 lkkCetleQFcHLtwIMRC3SX15mIF57LhicK5JfmvEYvI4ghVu5eKi5B59aGdhG0R9
 p7cKZ10TgDbr2I8gvQNZ/jSKhMCxLeGHeA781YWqJRU/YdvoYltVVOeWH60YZJdF
 K/dwaPsVkE68xVpTO5xieCTY9nSk1q258gQLgPkgThQYuRuPgWwAN/GbaFYGJjew
 vGmpp2F7VZxrVjBO89Rqgp47KWZE9ibEpc4Gb6pg7thqRl/joi6sosP0zGUApnFT
 fJSKMvXgNGDk6r8b5F2EYKTeLAHDuQ+FqzZeZ1+B/IOEwmyUosqslUlhmEVXqawW
 CUMO0MkimIfv7stPcgx/v7e3lQDz/8EGI8NBjJIipkeF+Fo39bUt4MA56SVp/lBC
 YyPokFjZGcqtADKHxZzNYTpGx/9NTMCDR510qhqITWQXd+e6ygbmLUgwdOOiBlQM
 gQDdPh29sPULSad37tbr
 =VZTM
 -----END PGP SIGNATURE-----

Merge tag 'tag-for-linus-3.10' of git://git.linaro.org/people/sumitsemwal/linux-dma-buf

Pull dma-buf updates from Sumit Semwal:
 "Added debugfs support to dma-buf"

* tag 'tag-for-linus-3.10' of git://git.linaro.org/people/sumitsemwal/linux-dma-buf:
  dma-buf: Add debugfs support
  dma-buf: replace dma_buf_export() with dma_buf_export_named()
2013-05-01 07:44:37 -07:00
Linus Torvalds
d70b1e06eb Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rkuo/linux-hexagon-kernel
Pull Hexagon fixes from Richard Kuo:
 "Changes for the Hexagon architecture (and one touching OpenRISC).

  They include various fixes to make use of additional arch features and
  cleanups.  The largest functional change is a cleanup of the signal
  and event return paths"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rkuo/linux-hexagon-kernel: (32 commits)
  Hexagon: add v4 CS regs to core copyout macro
  Hexagon: use correct translation for VMALLOC_START
  Hexagon: use correct translations for DMA mappings
  Hexagon: fix return value for notify_resume case in do_work_pending
  Hexagon: fix signal number for user mem faults
  Hexagon: remove two Kconfig entries
  arch: remove CONFIG_GENERIC_FIND_NEXT_BIT again
  Hexagon: update copyright dates
  Hexagon: add translation types for __vmnewmap
  Hexagon: fix signal.c compile error
  Hexagon: break up user fn/arg register setting
  Hexagon: use generic sys_fork, sys_vfork, and sys_clone
  Hexagon: fix psp/sp macro
  Hexagon: fix up int enable/disable at ret_from_fork
  Hexagon: add IOMEM and _relaxed IO macros
  Hexagon: switch to using the device type for IO mappings
  Hexagon: don't print info for offline CPU's
  Hexagon: add support for single-stepping (v4+)
  Hexagon: use correct work mask when checking for more work
  Hexagon: add support for additional exceptions
  ...
2013-05-01 07:43:05 -07:00
Linus Torvalds
b0b885657b tty: fix up atime/mtime mess, take three
We first tried to avoid updating atime/mtime entirely (commit
b0de59b573: "TTY: do not update atime/mtime on read/write"), and then
limited it to only update it occasionally (commit 37b7f3c765: "TTY:
fix atime/mtime regression"), but it turns out that this was both
insufficient and overkill.

It was insufficient because we let people attach to the shared ptmx node
to see activity without even reading atime/mtime, and it was overkill
because the "only once a minute" means that you can't really tell an
idle person from an active one with 'w'.

So this tries to fix the problem properly.  It marks the shared ptmx
node as un-notifiable, and it lowers the "only once a minute" to a few
seconds instead - still long enough that you can't time individual
keystrokes, but short enough that you can tell whether somebody is
active or not.
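
A sketch of the coarse-grained comparison this implies; the 8-second
granularity and the exact shape of the helper are assumptions, not the patch
itself.

  static void tty_update_time(struct timespec *time)
  {
          unsigned long sec = get_seconds();

          /* Only update when something outside the low three bits changed,
           * i.e. in ~8 second steps: coarse enough to hide keystroke timing,
           * fine enough that 'w' can still tell idle from active. */
          if ((sec ^ time->tv_sec) & ~7)
                  time->tv_sec = sec;
  }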

Reported-by: Simon Kirby <sim@hostway.ca>
Acked-by: Jiri Slaby <jslaby@suse.cz>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 07:32:21 -07:00
Linus Torvalds
08d7676083 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal
Pull compat cleanup from Al Viro:
 "Mostly about syscall wrappers this time; there will be another pile
  with patches in the same general area from various people, but I'd
  rather push those after both that and vfs.git pile are in."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal:
  syscalls.h: slightly reduce the jungles of macros
  get rid of union semop in sys_semctl(2) arguments
  make do_mremap() static
  sparc: no need to sign-extend in sync_file_range() wrapper
  ppc compat wrappers for add_key(2) and request_key(2) are pointless
  x86: trim sys_ia32.h
  x86: sys32_kill and sys32_mprotect are pointless
  get rid of compat_sys_semctl() and friends in case of ARCH_WANT_OLD_COMPAT_IPC
  merge compat sys_ipc instances
  consolidate compat lookup_dcookie()
  convert vmsplice to COMPAT_SYSCALL_DEFINE
  switch getrusage() to COMPAT_SYSCALL_DEFINE
  switch epoll_pwait to COMPAT_SYSCALL_DEFINE
  convert sendfile{,64} to COMPAT_SYSCALL_DEFINE
  switch signalfd{,4}() to COMPAT_SYSCALL_DEFINE
  make SYSCALL_DEFINE<n>-generated wrappers do asmlinkage_protect
  make HAVE_SYSCALL_WRAPPERS unconditional
  consolidate cond_syscall and SYSCALL_ALIAS declarations
  teach SYSCALL_DEFINE<n> how to deal with long long/unsigned long long
  get rid of duplicate logics in __SC_....[1-6] definitions
2013-05-01 07:21:43 -07:00
Sumit Semwal
b89e35636b dma-buf: Add debugfs support
Add debugfs support to make it easier to print debug information
about the dma-buf buffers.

Cc: Dave Airlie <airlied@redhat.com>
 [minor fixes on init and warning fix]
Cc: Dan Carpenter <dan.carpenter@oracle.com>
 [remove double unlock in fail case]
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
2013-05-01 16:36:22 +05:30
Sumit Semwal
78df969550 dma-buf: replace dma_buf_export() with dma_buf_export_named()
For debugging purposes, it is useful to have a name-string added
while exporting buffers. Hence, dma_buf_export() is replaced with
dma_buf_export_named(), which additionally takes 'exp_name' as a
parameter.

For backward compatibility, and for lazy exporters who don't wish to
name themselves, a #define dma_buf_export() is also made available,
which adds a __FILE__ instead of 'exp_name'.
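
A sketch of the compatibility define this describes, assuming
dma_buf_export_named() keeps the old parameter order and simply appends the
exporter name:

  #define dma_buf_export(priv, ops, size, flags) \
          dma_buf_export_named(priv, ops, size, flags, __FILE__)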

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
  [Thanks for the idea!]
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
2013-05-01 16:35:36 +05:30
Jiri Olsa
5919b30933 perf: Fix vmalloc ring buffer pages handling
If we allocate the perf ring buffer with the size of a single (user)
page, we will get memory corruption when releasing it in the
rb_free_work function (for the CONFIG_PERF_USE_VMALLOC option).

For a single-page-sized ring buffer the page_order is -1 (because
nr_pages is 0). This needs to be recognized in the rb_free_work
function to release the proper number of pages.

Add a data_page_nr function that returns the number of allocated
data pages, and convert the rest of the code to use it.
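
A sketch of the helper this describes; the explicit nr_pages check is written
out here for clarity, and the exact form should be read as an assumption.

  static int data_page_nr(struct ring_buffer *rb)
  {
          if (!rb->nr_pages)                      /* single user page, no data pages */
                  return 0;
          return rb->nr_pages << page_order(rb);  /* pages backing the data area */
  }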

Reported-by: Jan Stancek <jstancek@redhat.com>
Original-patch-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20130319143509.GA1128@krava.brq.redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-05-01 12:34:46 +02:00
Michael S. Tsirkin
150b9e51ae vhost: fix error handling in RESET_OWNER ioctl
The RESET_OWNER ioctl would leave the fd in a bad state if memory
allocation failed: the device is stopped but the owner is not reset.
Make the state changes after allocating memory, so that a failed ioctl
has no effect.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2013-05-01 10:02:54 +03:00
Michael S. Tsirkin
061b16cfe3 tcm_vhost: remove virtio-net.h dependency
vhost.h only has generic bits now, so we can drop the virtio-net.h
include in tcm_vhost.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2013-05-01 10:02:53 +03:00
Michael S. Tsirkin
81f95a5580 vhost: move per-vq net specific fields out to net
This will remove the need for vhost scsi to pull
in virtio-net.h.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2013-05-01 10:02:53 +03:00
Michael S. Tsirkin
3dfbff328f tcm_vhost: document inflight ref-counting use
Add more comments so we remember not to break it
next time we change things.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2013-05-01 10:02:52 +03:00
Asias He
2839400f8f vhost: move vhost-net zerocopy fields to net.c
On top of 'vhost: Allow device specific fields per vq', we can move
device-specific fields from the vhost virt queue to the device virt queue.

Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2013-05-01 10:02:52 +03:00
Asias He
f2f0173d6a tcm_vhost: Wait for pending requests in vhost_scsi_flush()
Unlike tcm_vhost_evt requests, tcm_vhost_cmd requests are passed to the
target core system, so we cannot make sure all the pending requests will
be finished by flushing the virt queue.

In this patch, we refcount every tcm_vhost_cmd request to make
vhost_scsi_flush() wait for all the pending requests issued before the
flush operation to be finished.

This is useful when we call vhost_scsi_clear_endpoint() to stop
tcm_vhost. No new requests will be passed to the target core system
because we clear the endpoint by setting vs_tpg to NULL, and we wait for
all the old requests. This guarantees that no requests will be leaked
and existing requests will be completed.
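
A simplified sketch of the count-and-wait idea (the real patch tracks
in-flight requests per flush generation; the names below are illustrative):

  struct inflight {
          struct kref kref;               /* one ref per pending command, plus one */
          struct completion comp;         /* completed when the last ref is dropped */
  };

  static void inflight_release(struct kref *kref)
  {
          struct inflight *i = container_of(kref, struct inflight, kref);

          complete(&i->comp);
  }

  static void inflight_init(struct inflight *i)
  {
          kref_init(&i->kref);            /* initial reference held by the flusher */
          init_completion(&i->comp);
  }

  static void cmd_start(struct inflight *i)
  {
          kref_get(&i->kref);             /* taken when a command is queued */
  }

  static void cmd_done(struct inflight *i)
  {
          kref_put(&i->kref, inflight_release);
  }

  static void flush_wait(struct inflight *i)
  {
          kref_put(&i->kref, inflight_release);   /* drop the initial reference */
          wait_for_completion(&i->comp);          /* returns once all commands finish */
  }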

Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2013-05-01 10:02:51 +03:00
Asias He
3ab2e420ec vhost: Allow device specific fields per vq
This is useful for any device that wants device-specific fields per vq.
For example, tcm_vhost wants a per-vq field to track requests which are
in flight on the vq. Also, on top of this we can add patches to move
things like ubufs from vhost.h out to net.c.
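
A sketch of the embedding pattern this enables; the field names are
illustrative (borrowed from the follow-up net patches) and to_nvq() is a
hypothetical helper.

  struct vhost_net_virtqueue {
          struct vhost_virtqueue vq;      /* generic part used by the vhost core */
          /* device-specific, per-vq state lives here instead of in vhost.h */
          size_t vhost_hlen;
          size_t sock_hlen;
  };

  static inline struct vhost_net_virtqueue *to_nvq(struct vhost_virtqueue *vq)
  {
          return container_of(vq, struct vhost_net_virtqueue, vq);
  }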

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2013-05-01 10:02:45 +03:00
Michael S. Tsirkin
bc7562355f Merge branch 'for-next-merge' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending into vhost-net-next 2013-05-01 09:16:50 +03:00
Dave Airlie
b11b88ef0e drm/i915: fix dmabuf vmap support
Sometimes that extra semicolon can really be hard to spot.

Acked-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2013-05-01 16:09:31 +10:00
Imre Deak
98b76231d7 drm/prime: warn for non-empty handle lookup list during drm file release
drm_gem_release should release all handles connected to the drm file and
so should also release the prime lookup entries of these handles. So
just WARN if this isn't the case.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2013-05-01 16:08:18 +10:00
Sjur Brændeland
01d779a14e caif_virtio: Remove bouncing email addresses
Remove our (soon to be) bouncing email addresses,
and update Dmitry's address.
Dmitry will take over as maintainer for CAIF from now on.

Cc: Vikram Arv <vikram.arv@stericsson.com>
Cc: Dmitry Tarnyagin <dmitry.tarnyagin@stericsson.com>
Cc: Dmitry Tarnyagin <dmitry.tarnyagin@lockless.no>
Signed-off-by: Sjur Brændeland <sjur.brandeland@stericsson.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Dmity Tarnyagin <dmitry.tarnyagin@lockless.no>
2013-05-01 11:59:15 +09:30
Richard Kuo
426d29ccb2 Hexagon: add v4 CS regs to core copyout macro
Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
2013-04-30 19:40:29 -05:00
Richard Kuo
5c883b462a Hexagon: use correct translation for VMALLOC_START
Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
2013-04-30 19:40:29 -05:00
Richard Kuo
5e1150542f Hexagon: use correct translations for DMA mappings
With physical offsets, pa<->va translations aren't just based
on PAGE_OFFSET anymore.

Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
2013-04-30 19:40:28 -05:00
Richard Kuo
c710f59087 Hexagon: fix return value for notify_resume case in do_work_pending
Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
2013-04-30 19:40:27 -05:00
Richard Kuo
610208bc8d Hexagon: fix signal number for user mem faults
Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
2013-04-30 19:40:27 -05:00
Paul Bolle
41929798bb Hexagon: remove two Kconfig entries
The Kconfig entries for HEXAGON_VM and HEXAGON_ANGEL_TRAPS were added,
together with the configuration and makefiles for the Hexagon
architecture, in v3.2. They have never been used. They can safely be
removed.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
[rkuo@codeaurora.org: adjust for line changes in Kconfig]
Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
2013-04-30 19:40:27 -05:00
Paul Bolle
65af3a3f89 arch: remove CONFIG_GENERIC_FIND_NEXT_BIT again
CONFIG_GENERIC_FIND_NEXT_BIT was removed in v3.0, but reappeared in two
architectures. Remove it again.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
2013-04-30 19:40:27 -05:00
Richard Kuo
7c6a5df44f Hexagon: update copyright dates
Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
2013-04-30 19:40:27 -05:00