Commit Graph

652 Commits

Author SHA1 Message Date
Dave Airlie
4300a0f8bd Merge tag 'v3.10-rc7' into drm-next
Linux 3.10-rc7

The sdvo lvds fix in this -fixes pull

commit c3456fb3e4
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date:   Mon Jun 10 09:47:58 2013 +0200

    drm/i915: prefer VBT modes for SVDO-LVDS over EDID

has a silent functional conflict with

commit 990256aec2
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date:   Fri May 31 12:17:07 2013 +0000

    drm: Add probed modes in probe order

in drm-next. We simply need to add the VBT modes before the EDID modes, i.e. the
other way around from the current order.

Conflicts:
	drivers/gpu/drm/drm_prime.c
	drivers/gpu/drm/i915/intel_sdvo.c
2013-06-27 20:40:44 +10:00
Jiri Kosina
701ba533f6 Input: make gamepad API keycodes more clear
Shuffle the defines around so that it is clear that BTN_A, BTN_B, etc. are
legacy definitions and not accidental typos that need their own key codes.

Suggested-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2013-06-27 11:54:51 +02:00
David Herrmann
d09bbfd2a8 input: document gamepad API and add extra keycodes
Until today all gamepad input drivers report their data differently. It is
nearly impossible to write applications for more than one device in a
generic way. Therefore, this patch introduces a uniform gamepad API which
will be used for all new drivers.

Instead of mapping buttons by their labels, we now map them by position.
This allows applications to work with any gamepad regardless of the labels
on the buttons. Furthermore, we standardize the ABS_* codes for analog
triggers and sticks.

For D-Pads the long overdue BTN_DPAD_* codes are introduced. Their use should
be fairly obvious. To avoid confusion, the action buttons now have
BTN_EAST/SOUTH/WEST/NORTH aliases.
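
For illustration, a minimal user-space sketch (not part of this patch) of how
an application might consume the position-based codes through evdev; the
device node path is a placeholder:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <linux/input.h>

    /* Hedged sketch: read gamepad events using the position-based codes.
     * "/dev/input/event0" is a placeholder for the gamepad's event node. */
    int main(void)
    {
        struct input_event ev;
        int fd = open("/dev/input/event0", O_RDONLY);

        if (fd < 0)
            return 1;

        while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
            if (ev.type == EV_KEY && ev.code == BTN_SOUTH)
                printf("south action button %s\n", ev.value ? "pressed" : "released");
            else if (ev.type == EV_KEY && ev.code == BTN_DPAD_UP)
                printf("d-pad up %s\n", ev.value ? "pressed" : "released");
            else if (ev.type == EV_ABS && ev.code == ABS_X)
                printf("left stick X: %d\n", ev.value);
        }
        close(fd);
        return 0;
    }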

Reported-by: Todd Showalter <todd@electronjump.com>
Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2013-06-27 11:51:31 +02:00
David S. Miller
a77471ff70 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next
Steffen Klassert says:

====================
Just one patch this time.

1) Drop packets when the matching SA is in larval state and add a
   statistic counter for that. From Fan Du.

Please pull or let me know if there are problems.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2013-06-26 13:23:13 -07:00
Alexander Frolkin
eba3b5a787 ipvs: SH fallback and L4 hashing
By default the SH scheduler rejects connections that are hashed onto a
realserver of weight 0.  This patch adds a flag to make SH choose a
different realserver in this case, instead of rejecting the connection.

The patch also adds a flag to make SH include the source port (TCP, UDP,
SCTP) in the hash as well as the source address.  This basically allows
for deterministic round-robin load balancing (i.e., where any director
in a cluster of directors with identical config will send the same
packet the same way).

The flags are service flags (IP_VS_SVC_F_SCHED*) so that these options
can be set per service.  They are set using a new option to ipvsadm.
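
As a rough sketch (assuming the new flags are exposed as
IP_VS_SVC_F_SCHED_SH_FALLBACK and IP_VS_SVC_F_SCHED_SH_PORT in the IPVS uapi
header), a configuration tool would OR them into the service flags word:

    #include <linux/ip_vs.h>

    /* Hedged sketch: combine the new per-service scheduler flags when
     * configuring a source-hash ("sh") service. */
    unsigned int sh_flags = IP_VS_SVC_F_SCHED_SH_FALLBACK | /* fall back instead of rejecting */
                            IP_VS_SVC_F_SCHED_SH_PORT;      /* hash on the L4 source port too */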

Signed-off-by: Alexander Frolkin <avf@eldamar.org.uk>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
2013-06-26 18:01:46 +09:00
Eliezer Tamir
2d48d67fa8 net: poll/select low latency socket support
select/poll busy-poll support.

Split the sysctl value into two separate ones, one for read and one for poll,
and updated Documentation/sysctl/net.txt accordingly.

Add a new poll flag POLL_LL. When this flag is set, sock_poll will call
sk_poll_ll if possible. sock_poll sets this flag in its return value
to indicate to select/poll when a socket that can busy poll is found.

When poll/select have nothing to report, call the low-level
sock_poll again until we are out of time or we find something.

Once the system call finds something, it stops setting POLL_LL, so it can
return the result to the user ASAP.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-06-25 16:35:52 -07:00
H. Peter Anvin
2fc016c5bd linux/const.h: Add _BITUL() and _BITULL()
Add macros for single bit definitions of a specific type.  These are
similar to the BIT() macro that already exists, but with a few
exceptions:

1. The namespace is such that they can be used in uapi definitions.
2. The type is set with the _AC() macro to allow it to be used in
   assembly.
3. The type is explicitly specified to be UL or ULL.
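
A hedged sketch of what the additions look like, together with a hypothetical
usage:

    /* Sketch of the additions to include/uapi/linux/const.h (hedged): */
    #define _BITUL(x)   (_AC(1, UL) << (x))
    #define _BITULL(x)  (_AC(1, ULL) << (x))

    /* Hypothetical uapi-style usage: bit 3 as a UL constant that also works
     * in assembly, where a bare 1UL literal would not. */
    #define EXAMPLE_FEATURE_FOO _BITUL(3)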

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-nbca8p7cg6jyjoit7klh3o91@git.kernel.org
2013-06-25 15:50:04 -07:00
Daniel Borkmann
77e2af0312 net: if_arp: add ARPHRD_NETLINK type
This small patch adds the definition of ARPHRD_NETLINK, which can, for
example, be used by netlink monitoring devices as their device type, so
that sockaddr_ll can pick it up and choose the correct packet dissector
based on it.
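
A hedged sketch of the consumer side: an AF_PACKET-based capture tool could
key its dissector choice off the new hardware type reported in sockaddr_ll
(the helper name is made up for illustration):

    #include <linux/if_arp.h>      /* ARPHRD_NETLINK */
    #include <linux/if_packet.h>   /* struct sockaddr_ll */

    /* Hypothetical helper: decide whether a captured frame came from a
     * netlink monitoring device by checking the link-layer hardware type. */
    static int frame_is_netlink(const struct sockaddr_ll *sll)
    {
        return sll->sll_hatype == ARPHRD_NETLINK;
    }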

Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-06-24 16:39:05 -07:00
John W. Linville
66ba271ab9 Merge branch 'for-john' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next 2013-06-24 14:44:59 -04:00
Emil Goode
a191e48d44 drm/tegra: Include header drm/drm.h
Include definitions of used types by including drm/drm.h

Sparse output:
/usr/include/drm/tegra_drm.h:21:
	found __[us]{8,16,32,64} type without
	#include <linux/types.h>

Signed-off-by: Emil Goode <emilgoode@gmail.com>
Signed-off-by: Thierry Reding <thierry.reding@gmail.com>
2013-06-22 12:43:50 +02:00
John W. Linville
7d2a47aab2 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next into for-davem
Conflicts:
	net/wireless/nl80211.c
2013-06-21 15:42:30 -04:00
Alex Williamson
166fd7d94a vfio: hugepage support for vfio_iommu_type1
We currently send all mappings to the iommu in PAGE_SIZE chunks,
which prevents the iommu from enabling support for larger page sizes.
We still need to pin pages, which means we step through them in
PAGE_SIZE chunks, but we can batch up contiguous physical memory
chunks to allow the iommu the opportunity to use larger pages.  The
approach here is a bit different from the one currently used for
legacy KVM device assignment.  Rather than looking at the vma page
size and using that as the maximum size to pass to the iommu, we
instead simply look at whether the next page is physically
contiguous.  This means we might ask the iommu to map a 4MB region,
while legacy KVM might limit itself to a maximum of 2MB.

Splitting our mapping path also allows us to be smarter about locked
memory because we can more easily unwind if the user attempts to
exceed the limit.  Therefore, rather than assuming that a mapping
will result in locked memory, we test each page as it is pinned to
determine whether it locks RAM vs an mmap'd MMIO region.  This should
result in better locking granularity and less locked page fudge
factors in userspace.

The unmap path uses the same algorithm as legacy KVM.  We don't want
to track the pfn for each mapping ourselves, but we need the pfn in
order to unpin pages.  We therefore ask the iommu for the iova to
physical address translation, ask it to unpin a page, and see how many
pages were actually unpinned.  IOMMUs supporting large pages will
often return something bigger than a page here, which we know will be
physically contiguous, so we can unpin a batch of pfns.  IOMMUs not
supporting large mappings won't see an improvement in batching here as
they only unmap a page at a time.

With this change, we also make a clarification to the API for mapping
and unmapping DMA.  We can only guarantee unmaps at the same
granularity as used for the original mapping.  In other words,
unmapping a subregion of a previous mapping is not guaranteed and may
result in a larger or smaller unmapping than requested.  The size
field in the unmapping structure is updated to reflect this.
Previously this field was left unmodified, always returning the
requested unmap size.  This is now updated to return the actual unmap
size on success, allowing userspace to appropriately track mappings.
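
A hedged user-space sketch of the clarified unmap semantics (container_fd,
iova and size stand for values from an earlier VFIO_IOMMU_MAP_DMA):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Hedged sketch: unmap a previously mapped IOVA range; on return the
     * kernel updates .size to the number of bytes actually unmapped. */
    static int unmap_range(int container_fd, unsigned long long iova,
                           unsigned long long size)
    {
        struct vfio_iommu_type1_dma_unmap unmap = {
            .argsz = sizeof(unmap),
            .iova  = iova,
            .size  = size,          /* requested size */
        };

        if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap))
            return -1;

        printf("unmapped %llu of %llu requested bytes\n",
               (unsigned long long)unmap.size, size);
        return 0;
    }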

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2013-06-21 09:38:02 -06:00
Hans Verkuil
b71c99801e [media] v4l2-core: remove support for obsolete VIDIOC_DBG_G_CHIP_IDENT
This has been replaced by the new and much better VIDIOC_DBG_G_CHIP_INFO.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
2013-06-21 10:46:44 -03:00
Sean Hefty
5bc2b7b397 RDMA/ucma: Allow user space to specify AF_IB when joining multicast
Allow user space applications to join multicast groups using MGIDs
directly.  MGIDs may be passed using AF_IB addresses.  Since the
current multicast join command only supports addresses as large as
sockaddr_in6, define a new structure for joining addresses specified
using sockaddr_ib.

Since AF_IB allows the user to specify the qkey when resolving a
remote UD QP address, when joining the multicast group use the qkey
value, if one has been assigned.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2013-06-20 23:35:45 -07:00
Sean Hefty
209cf2a751 RDMA/ucma: Allow user space to pass AF_IB into resolve
Allow user space applications to call resolve_addr using AF_IB.  To
support sockaddr_ib, we need to define a new structure capable of
handling the larger address size.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2013-06-20 23:35:44 -07:00
Sean Hefty
eebe4c3a62 RDMA/ucma: Allow user space to bind to AF_IB
Support user space binding to addresses using AF_IB.  Since
sockaddr_ib is larger than sockaddr_in6, we need to define a larger
structure when binding using AF_IB.  This time we use sockaddr_storage
to cover future cases.
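
For illustration only, a hedged librdmacm-side sketch of binding to an AF_IB
wildcard address; the <infiniband/ib.h> header location and the zeroed
service ID are assumptions, not part of this patch:

    #include <string.h>
    #include <infiniband/ib.h>     /* assumed: AF_IB and struct sockaddr_ib */
    #include <rdma/rdma_cma.h>

    /* Hedged sketch: bind an rdma_cm_id to the AF_IB wildcard address with a
     * placeholder service ID; real callers would fill in a GID and SID. */
    static int bind_any_af_ib(struct rdma_cm_id *id)
    {
        struct sockaddr_ib sib;

        memset(&sib, 0, sizeof(sib));
        sib.sib_family   = AF_IB;
        sib.sib_sid      = 0;      /* placeholder service ID */
        sib.sib_sid_mask = 0;

        return rdma_bind_addr(id, (struct sockaddr *)&sib);
    }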

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2013-06-20 23:35:43 -07:00
Sean Hefty
05ad94577e RDMA/ucma: Name changes to indicate only IP addresses supported
Several commands into the RDMA CM from user space are restricted to
supporting addresses which fit into a sockaddr_in6 structure: bind
address, resolve address, and join multicast.

With the addition of AF_IB, we need to support addresses which are
larger than sockaddr_in6.  This will be done by adding new commands
that exchange address information using sockaddr_storage.  However, to
support existing applications, we maintain the current commands and
structures, but rename them to indicate that they only support IPv4
and IPv6 addresses.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2013-06-20 23:35:42 -07:00
Sean Hefty
edaa7a5578 RDMA/ucma: Add ability to query GID addresses
Part of address resolution is mapping IP addresses to IB GIDs.  With
the changes to support querying larger addresses and more path records,
also provide a way to query IB GIDs after resolution completes.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2013-06-20 23:35:42 -07:00
Sean Hefty
ac53b264b2 RDMA/ucma: Support querying when IB paths are not reversible
The current query_route call can return up to two path records, the
assumption being that one is the primary path, with optional support
for an alternate path.  In both cases, the paths are assumed to be
reversible and are used to send CM MADs.

With the ability to manually set IB path data, the rdma cm can
eventually be capable of using up to 6 paths per connection:

	forward primary, reverse primary,
	forward alternate, reverse alternate,
	reversible primary path for CM MADs
	reversible alternate path for CM MADs.

(It is unclear at this time if IB routing will complicate this.)  In
order to handle more flexible routing topologies, add a new command to
report any number of paths.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2013-06-20 23:35:40 -07:00
Sean Hefty
ee7aed4528 RDMA/ucma: Support querying for AF_IB addresses
The sockaddr structure for AF_IB is larger than sockaddr_in6.  The
rdma cm user space ABI uses the latter to exchange address information
between user space and the kernel.

To support querying for larger addresses, define a new query command
that exchanges data using sockaddr_storage, rather than sockaddr_in6.
Unlike the existing query_route command, the new command only returns
address information.  Route (i.e. path record) data is separated.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2013-06-20 23:35:39 -07:00
Sean Hefty
5c438135ad RDMA/cma: Set qkey for AF_IB
Allow the user to specify the qkey when using AF_IB.  The qkey is
added to struct rdma_ucm_conn_param in place of a reserved field, but
for backwards compatibility, is only accessed if the associated
rdma_cm_id is using AF_IB.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2013-06-20 23:35:37 -07:00
Eric Dumazet
681f130f39 netfilter: xt_socket: add XT_SOCKET_NOWILDCARD flag
The xt_socket module can be a nice replacement for the conntrack module
in some cases (SYN filtering, for example).

But it lacks the ability to match the third packet of the TCP
handshake (the ACK coming from the client).

Add a XT_SOCKET_NOWILDCARD flag to disable the wildcard mechanism.

The wildcard is the legacy socket match behavior, which ignores
LISTEN sockets bound to INADDR_ANY (or the IPv6 equivalent).

iptables -I INPUT -p tcp --syn -j SYN_CHAIN
iptables -I INPUT -m socket --nowildcard -j ACCEPT

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Patrick McHardy <kaber@trash.net>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2013-06-20 20:28:49 +02:00
Mauro Carvalho Chehab
37c1d2e409 Merge branch 'linus' into patchwork
* linus: (1465 commits)
  ARM: tegra30: clocks: Fix pciex clock registration
  lseek(fd, n, SEEK_END) does *not* go to eof - n
  Linux 3.10-rc6
  smp.h: Use local_irq_{save,restore}() in !SMP version of on_each_cpu().
  powerpc: Fix missing/delayed calls to irq_work
  powerpc: Fix emulation of illegal instructions on PowerNV platform
  powerpc: Fix stack overflow crash in resume_kernel when ftracing
  snd_pcm_link(): fix a leak...
  use can_lookup() instead of direct checks of ->i_op->lookup
  move exit_task_namespaces() outside of exit_notify()
  fput: task_work_add() can fail if the caller has passed exit_task_work()
  xfs: don't shutdown log recovery on validation errors
  xfs: ensure btree root split sets blkno correctly
  xfs: fix implicit padding in directory and attr CRC formats
  xfs: don't emit v5 superblock warnings on write
  mei: me: clear interrupts on the resume path
  mei: nfc: fix nfc device freeing
  mei: init: Flush scheduled work before resetting the device
  sctp: fully initialize sctp_outq in sctp_outq_init
  netiucv: Hold rtnl between name allocation and device registration.
  ...
2013-06-20 05:19:09 -03:00
Alexey Kardashevskiy
5ffd229c02 powerpc/vfio: Implement IOMMU driver for VFIO
VFIO implements the platform-independent parts, such as
a PCI driver, BAR access (via read/write on a file descriptor
or direct mapping when possible) and IRQ signaling.

The platform-dependent part includes IOMMU initialization
and handling.  This patch implements an IOMMU driver for VFIO
which maps/unmaps pages for guest IO and provides information
about the DMA window (required by a POWER guest).
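
A hedged user-space sketch of querying that DMA window (assuming the sPAPR
TCE info ioctl and structure exported by this driver; container_fd stands for
an open, attached VFIO container):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Hedged sketch: ask the sPAPR TCE IOMMU backend for the 32-bit DMA
     * window that a POWER guest needs to know about. */
    static int print_dma_window(int container_fd)
    {
        struct vfio_iommu_spapr_tce_info info = {
            .argsz = sizeof(info),
        };

        if (ioctl(container_fd, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info))
            return -1;

        printf("DMA32 window: start 0x%x, size 0x%x\n",
               info.dma32_window_start, info.dma32_window_size);
        return 0;
    }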

Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-06-20 16:55:14 +10:00
Cong Wang
bcefe17cff tcp: introduce a per-route knob for quick ack
In previous discussions, I tried to find some reasonable heuristics
for delayed ACK; however, this seems not to be possible. According to Eric:

	"ACKS might also be delayed because of bidirectional
	traffic, and is more controlled by the application
	response time. TCP stack can not easily estimate it."

	"ACK can be incredibly useful to recover from losses in
	a short time.

	The vast majority of TCP sessions are small lived, and we
	send one ACK per received segment anyway at beginning or
	retransmits to let the sender smoothly increase its cwnd,
	so an auto-tuning facility wont help them that much."

and according to David:

	"ACKs are the only information we have to detect loss.

	And, for the same reasons that TCP VEGAS is fundamentally
	broken, we cannot measure the pipe or some other
	receiver-side-visible piece of information to determine
	when it's "safe" to stretch ACK.

	And even if it's "safe", we should not do it so that losses are
	accurately detected and we don't spuriously retransmit.

	The only way to know when the bandwidth increases is to
	"test" it, by sending more and more packets until drops happen.
	That's why all successful congestion control algorithms must
	operate on explicited tested pieces of information.

	Similarly, it's not really possible to universally know if
	it's safe to stretch ACK or not."

It still makes sense to enable or disable quick ACK mode, as the
TCP_QUICKACK socket option does.

This is similar to the TCP_QUICKACK socket option, but for people who can't
modify the source code and still want to control TCP delayed ACK
behavior. As David suggested, this should have per-path scope, since
different paths may want different behaviors.
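
For comparison, a minimal sketch of the existing per-socket counterpart
(TCP_QUICKACK via setsockopt), which requires the kind of source access this
per-route knob avoids:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Hedged sketch: the per-socket counterpart of the new per-route knob.
     * Applications that can be modified may toggle quick ACKs directly;
     * the kernel may clear this hint again after some ACKs are sent. */
    static int enable_quickack(int sockfd)
    {
        int one = 1;

        return setsockopt(sockfd, IPPROTO_TCP, TCP_QUICKACK,
                          &one, sizeof(one));
    }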

Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Rick Jones <rick.jones2@hp.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Graf <tgraf@suug.ch>
CC: David Laight <David.Laight@ACULAB.COM>
Signed-off-by: Cong Wang <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-06-19 23:06:51 -07:00