1. 29 Mar, 2015 1 commit
  2. 04 Mar, 2015 1 commit
  3. 27 Feb, 2015 1 commit
  4. 23 Feb, 2015 1 commit
    • team: fix possible null pointer dereference in team_handle_frame · 57e59563
      Jiri Pirko authored
      Currently the following race is possible in team:
      CPU0                                        CPU1
                                                      priv_flags &= ~IFF_TEAM_PORT
            priv_flags & IFF_TEAM_PORT == 0
          return NULL (instead of port got
                       from rx_handler_data)
      The problem is that the flag is removed before the rx_handler is unregistered.
      If team_handle_frame is called in between, team_port_exists returns 0
      and team_port_get_rcu will return NULL.
      So do not check the flag here. It is guaranteed by netdev_rx_handler_unregister
      that team_handle_frame will always see a valid rx_handler_data pointer.
      Signed-off-by: Jiri Pirko <jiri@resnulli.us>
      Fixes: 3d249d4c ("net: introduce ethernet teaming device")
      Signed-off-by: David S. Miller <davem@davemloft.net>
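The fix above can be illustrated with a minimal userspace sketch (hypothetical names, not the kernel code): the pre-patch lookup bails out as soon as the flag is cleared, while the fixed lookup trusts rx_handler_data, which netdev_rx_handler_unregister() keeps valid until the handler is actually gone.

```c
/* Userspace sketch of the fix, with simplified stand-ins for the
 * kernel's structures. The "old" lookup mirrors the buggy check,
 * the "new" one mirrors team_port_get_rcu() after the patch. */
#include <stddef.h>

#define IFF_TEAM_PORT 0x1u

struct net_device {
    unsigned int priv_flags;
    void *rx_handler_data;   /* the team port, while handler is registered */
};

/* Buggy pre-patch behavior: the flag is cleared before the rx_handler
 * is unregistered, so a concurrent frame sees NULL here even though
 * rx_handler_data is still perfectly valid. */
static void *port_get_old(struct net_device *dev)
{
    if (!(dev->priv_flags & IFF_TEAM_PORT))
        return NULL;
    return dev->rx_handler_data;
}

/* Fixed behavior: no flag check; trust rx_handler_data for as long as
 * the rx_handler itself is registered. */
static void *port_get_new(struct net_device *dev)
{
    return dev->rx_handler_data;
}
```

The point is that the flag and the handler registration are torn down at different times, so only the handler registration is a safe validity signal.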
  5. 02 Feb, 2015 1 commit
  6. 14 Jan, 2015 1 commit
    • team: avoid possible underflow of count_pending value for notify_peers and mcast_rejoin · b0d11b42
      Jiri Pirko authored
      This patch fixes a race condition that may set count_pending to -1,
      which results in an unwanted large burst of ARP messages (in the
      "notify peers" case).
      Consider the following scenario:
      count_pending == 2
         CPU0                                           CPU1
      					  atomic_dec_and_test (dec count_pending to 1)
         atomic_add (adding 1 to count_pending)
      					  atomic_dec_and_test (dec count_pending to 1)
      					  atomic_dec_and_test (dec count_pending to 0)
      					  atomic_dec_and_test (dec count_pending to -1)
      Fix this race by using atomic_dec_if_positive, which prevents
      count_pending from going below 0.
      Fixes: fc423ff0 ("team: add peer notification")
      Fixes: 492b200e ("team: add support for sending multicast rejoins")
      Signed-off-by: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: Jiri Benc <jbenc@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
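The semantics the fix relies on can be sketched in userspace with C11 atomics (illustrative only; the kernel has its own atomic_dec_if_positive): decrement only when the observed value is positive, retrying the compare-and-swap on contention.

```c
/* Userspace sketch of atomic_dec_if_positive semantics using C11
 * atomics. Like the kernel primitive, it returns the result of the
 * (attempted) decrement, so a non-positive return means "not taken". */
#include <stdatomic.h>

static int atomic_dec_if_positive_sketch(atomic_int *v)
{
    int old = atomic_load(v);

    /* Retry until we either observe a non-positive value (give up)
     * or win the CAS (decrement). On CAS failure, 'old' is reloaded. */
    while (old > 0 &&
           !atomic_compare_exchange_weak(v, &old, old - 1))
        ;
    return old - 1;
}
```

With this primitive, the racing extra decrements in the scenario above simply fail once the counter reaches zero instead of driving it to -1.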
  7. 12 Jan, 2015 1 commit
  8. 13 Nov, 2014 1 commit
    • net: generic dev_disable_lro() stacked device handling · fbe168ba
      Michal Kubeček authored
      Large receive offloading is known to cause problems if received packets
      are passed to another host. Therefore the kernel disables it by calling
      dev_disable_lro() whenever a network device is enslaved in a bridge or
      forwarding is enabled for it (or globally). For virtual devices we need
      to disable LRO on the underlying physical device (which is actually
      receiving the packets).
      The current dev_disable_lro() code handles this propagation for a vlan
      (including 802.1ad nested vlan), macvlan or a vlan on top of a macvlan.
      It doesn't handle other stacked devices and their combinations, in
      particular propagation from a bond to its slaves which often causes
      problems in virtualization setups.
      As we now have generic data structures describing the upper-lower device
      relationship, dev_disable_lro() can be generalized to disable LRO also
      for all lower devices (if any) once it is disabled for the device itself.
      For bonding and teaming devices, it is necessary to disable LRO not only
      on current slaves at the moment when dev_disable_lro() is called but
      also on any slave (port) added later.
      v2: use lower device links for all devices (including vlan and macvlan)
      Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
      Acked-by: Veaceslav Falico <vfalico@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
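A toy model of the generalized propagation (invented struct and field names; the kernel walks its adjacency lists via netdev_for_each_lower_dev() rather than a fixed-size array):

```c
/* Toy sketch: disable the LRO feature bit on a device and recursively
 * on every lower device, whatever the stacking (bond on vlan on phys,
 * etc.). All names here are made up for illustration. */
#define FEAT_LRO 0x1u

struct dev {
    unsigned int features;
    struct dev *lower[4];   /* up to 4 lower devices in this toy model */
};

static void dev_disable_lro_sketch(struct dev *d)
{
    d->features &= ~FEAT_LRO;                 /* the device itself */
    for (int i = 0; i < 4 && d->lower[i]; i++)
        dev_disable_lro_sketch(d->lower[i]);  /* ...and all lower devs */
}
```

Because the walk is generic, any combination of stacked devices is covered, not just the vlan/macvlan cases the old code special-cased.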
  9. 07 Oct, 2014 1 commit
    • net: better IFF_XMIT_DST_RELEASE support · 02875878
      Eric Dumazet authored
      Testing xmit_more support with netperf and connected UDP sockets,
      I found strange dst refcount false sharing.
      Current handling of IFF_XMIT_DST_RELEASE is not optimal.
      Dropping the dst in validate_xmit_skb() is certainly too late in case
      the packet was queued by CPU X but dequeued by CPU Y.
      The logical point to take care of drop/force is in __dev_queue_xmit()
      before even taking qdisc lock.
      As Julian Anastasov pointed out, the need for skb_dst() might come from
      some packet schedulers or classifiers.
      This patch adds a new helper to cleanly express the needs of various
      drivers or qdiscs/classifiers.
      Drivers that need skb_dst() in their ndo_start_xmit() should call the
      following helper in their setup routine instead of the prior:
      	dev->priv_flags &= ~IFF_XMIT_DST_RELEASE;
      Instead of using a single bit, we use two bits, one being
      eventually rebuilt in bonding/team drivers.
      The other one is permanent and blocks IFF_XMIT_DST_RELEASE from being
      rebuilt in bonding/team. Eventually, we could add something
      smarter later.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: David S. Miller <davem@davemloft.net>
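The two-bit scheme can be sketched with invented flag values (these are not the kernel's IFF_* constants): the master's rebuildable bit may be restored only if every slave still carries the permanent bit.

```c
/* Sketch of the two-bit idea described above, with made-up flag values:
 * XMIT_DST_RELEASE is the bit bonding/team may recompute from slaves,
 * XMIT_DST_RELEASE_PERM is the permanent bit that gates the rebuild. */
#define XMIT_DST_RELEASE      0x1u  /* rebuildable bit */
#define XMIT_DST_RELEASE_PERM 0x2u  /* permanent bit */

/* Recompute the master's rebuildable bit: it may be set only if every
 * slave still has its permanent bit set. */
static unsigned int rebuild_dst_release(const unsigned int *slave_flags,
                                        int n, unsigned int master_flags)
{
    unsigned int out = master_flags | XMIT_DST_RELEASE;

    for (int i = 0; i < n; i++)
        if (!(slave_flags[i] & XMIT_DST_RELEASE_PERM))
            out &= ~XMIT_DST_RELEASE;  /* one dst-keeping slave blocks it */
    return out;
}
```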
  10. 05 Oct, 2014 1 commit
    • team: avoid race condition in scheduling delayed work · 47549650
      Joe Lawrence authored
      When team_notify_peers and team_mcast_rejoin are called, they both reset
      their respective .count_pending atomic variable. Then when the actual
      worker function is executed, the variable is atomically decremented.
      This pattern introduces a potential race condition where the
      .count_pending rolls over and the worker function keeps rescheduling
      until .count_pending decrements to zero again:
      THREAD 1                           THREAD 2
      ========                           ========
        atomic_set count_pending = 1
                                         atomic_set count_pending = 1
          count_pending = 0
                                           count_pending = -1
                                         (repeat until count_pending = 0)
      Instead of assigning a new value to .count_pending, use atomic_add to
      tack on the additional desired worker function invocations.
      Signed-off-by: Joe Lawrence <joe.lawrence@stratus.com>
      Acked-by: Jiri Pirko <jiri@resnulli.us>
      Fixes: fc423ff0 ("team: add peer notification")
      Fixes: 492b200e ("team: add support for sending multicast rejoins")
      Signed-off-by: David S. Miller <davem@davemloft.net>
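The difference between the racy and fixed patterns can be shown with a userspace C11 sketch (function names invented): an atomic store overwrites whatever count the worker has already partially consumed, while atomic_add accumulates the outstanding invocations.

```c
/* Userspace sketch contrasting the two patterns from this fix. */
#include <stdatomic.h>

/* Old, racy pattern: a second request clobbers the counter, losing
 * whatever state the worker and earlier requests established. */
static void schedule_work_set(atomic_int *count_pending, int n)
{
    atomic_store(count_pending, n);
}

/* Fixed pattern: tack the requested invocations onto the counter. */
static void schedule_work_add(atomic_int *count_pending, int n)
{
    atomic_fetch_add(count_pending, n);
}
```

With the add-based variant, concurrent requesters can never make the worker's decrements outnumber the scheduled invocations.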
  11. 26 Aug, 2014 1 commit
  12. 05 Aug, 2014 1 commit
  13. 02 Jun, 2014 1 commit
    • team: fix mtu setting · 9d0d68fa
      Jiri Pirko authored
      Currently it is not possible to set the MTU on a team device which has
      a port enslaved to it. The reason is that when team_change_mtu() calls
      dev_set_mtu() for a port device, the notifier for the NETDEV_PRECHANGEMTU
      event is called and team_device_event() returns NOTIFY_BAD, forbidding
      the change. So fix this by returning NOTIFY_DONE here in case the team
      itself is changing the MTU in team_change_mtu().
      Introduced-by: 3d249d4c ("net: introduce ethernet teaming device")
      Signed-off-by: Jiri Pirko <jiri@resnulli.us>
      Acked-by: Flavio Leitner <fbl@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
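The resulting control flow can be sketched with simplified names (the flag below is a stand-in for the state the patch tracks while team_change_mtu() is running):

```c
/* Toy sketch of the event-handler decision after the fix. */
#define NOTIFY_DONE 0
#define NOTIFY_BAD  1

struct team {
    int port_mtu_change_allowed;  /* set while team_change_mtu() runs */
};

/* Handler for a PRECHANGEMTU-style event on a team port. */
static int prechangemtu_event(struct team *team)
{
    if (team->port_mtu_change_allowed)
        return NOTIFY_DONE;       /* our own dev_set_mtu() on a port */
    return NOTIFY_BAD;            /* forbid external MTU changes on a port */
}
```

The flag distinguishes the team driver's own recursive dev_set_mtu() calls from outside attempts to change a port's MTU directly.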
  14. 22 May, 2014 1 commit
    • teaming: fix vlan_features computing · 3625920b
      Michal Kubeček authored
      __team_compute_features() uses netdev_increment_features() to
      combine vlan_features of slaves into vlan_features of the team.
      As netdev_increment_features() only adds most features and we
      start with TEAM_VLAN_FEATURES, we can end up with features none
      of the slaves provided.
      Initialize vlan_features only with the flags which are both in
      TEAM_VLAN_FEATURES and NETIF_F_ALL_FOR_ALL. Right now there is
      no such feature, so we actually initialize vlan_features
      with zero, but stating it explicitly will make the code more
      future-proof.
      Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
      Signed-off-by: David S. Miller <davem@davemloft.net>
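A toy version of the computation, with invented feature bits and a simplified stand-in for netdev_increment_features(), shows the effect of seeding the accumulator with only the TEAM_VLAN_FEATURES & NETIF_F_ALL_FOR_ALL intersection:

```c
/* Invented feature bits for illustration; not the real NETIF_F_* values. */
#define F_SG          0x1u
#define F_ALL_FOR_ALL 0x2u   /* pretend this one is in NETIF_F_ALL_FOR_ALL */

#define TEAM_VLAN_FEATURES (F_SG | F_ALL_FOR_ALL)
#define ALL_FOR_ALL_MASK    F_ALL_FOR_ALL

/* Simplified stand-in for netdev_increment_features():
 * "all-for-all" features are ORed in, everything else is ANDed. */
static unsigned int increment_features(unsigned int all, unsigned int one)
{
    return (all & one) | (ALL_FOR_ALL_MASK & (all | one));
}

static unsigned int compute_vlan_features(const unsigned int *slave, int n)
{
    /* The fix: seed with the intersection, not all of TEAM_VLAN_FEATURES. */
    unsigned int features = TEAM_VLAN_FEATURES & ALL_FOR_ALL_MASK;

    for (int i = 0; i < n; i++)
        features = increment_features(features, slave[i] & TEAM_VLAN_FEATURES);
    return features;
}
```

Only features every slave provides (plus "all-for-all" features any slave provides) survive the fold, so the team no longer advertises features none of its slaves has.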
  15. 24 Apr, 2014 1 commit
  16. 29 Mar, 2014 1 commit
  17. 15 Mar, 2014 1 commit
  18. 17 Feb, 2014 1 commit
  19. 14 Feb, 2014 1 commit
  20. 23 Jan, 2014 1 commit
  21. 17 Jan, 2014 1 commit
  22. 10 Jan, 2014 1 commit
    • net: core: explicitly select a txq before doing l2 forwarding · f663dd9a
      Jason Wang authored
      Currently, the tx queue is selected implicitly in ndo_dfwd_start_xmit().
      This causes several issues:
      - NETIF_F_LLTX was removed for macvlan, so the txq lock was taken for
        macvlan instead of the lower device, which misses the necessary txq
        synchronization for the lower device such as txq stopping or freezing
        required by the dev watchdog or control path.
      - dev_hard_start_xmit() was called with a NULL txq, which bypasses the
        net device watchdog.
      - dev_hard_start_xmit() does not check txq everywhere, which will lead
        to a crash when tso is disabled for the lower device.
      Fix this by explicitly introducing a new param for .ndo_select_queue() for just
      selecting queues in the case of l2 forwarding offload. netdev_pick_tx() was also
      extended to accept this parameter and dev_queue_xmit_accel() was used to do l2
      forwarding transmission.
      With these fixes, NETIF_F_LLTX can be preserved for macvlan and there is
      no need to check txq against NULL in dev_hard_start_xmit(). Also there is
      no need to keep a dedicated ndo_dfwd_start_xmit() and we can just reuse
      the code of dev_queue_xmit() to do the transmission.
      In the future, this will also be required for macvtap l2 forwarding
      support since it provides a necessary synchronization method.
      Cc: John Fastabend <john.r.fastabend@intel.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: e1000-devel@lists.sourceforge.net
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
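The idea of the new parameter can be sketched with simplified types (this is not the real .ndo_select_queue() signature): an opaque accel_priv value identifies the l2-forwarding destination, so the lower device can return the txq reserved for that offloaded station instead of its normal selection.

```c
/* Toy sketch: queue selection with an extra accel_priv parameter.
 * Struct and function names are invented for illustration. */
#include <stddef.h>

struct txq { int id; };

struct accel_station {
    struct txq *reserved_txq;   /* queue carved out for this station */
};

/* Analogous in spirit to .ndo_select_queue(dev, skb, accel_priv). */
static struct txq *select_queue(struct txq *default_txq, void *accel_priv)
{
    if (accel_priv)  /* l2 forwarding offload path */
        return ((struct accel_station *)accel_priv)->reserved_txq;
    return default_txq;  /* normal transmit path */
}
```

Because the txq is now chosen explicitly before transmission, the normal txq locking and watchdog machinery apply to the offloaded path too.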
  23. 29 Nov, 2013 1 commit
  24. 19 Nov, 2013 3 commits
  25. 14 Nov, 2013 1 commit
  26. 06 Nov, 2013 1 commit
    • net: Explicitly initialize u64_stats_sync structures for lockdep · 827da44c
      John Stultz authored
      In order to enable lockdep on seqcount/seqlock structures, we
      must explicitly initialize any locks.
      The u64_stats_sync structure uses a seqcount, and thus we need
      to introduce a u64_stats_init() function and use it to initialize
      the structure.
      This unfortunately adds a lot of fairly trivial initialization code
      to a number of drivers. But the benefit of ensuring correctness makes
      this worthwhile.
      Because these changes are required for lockdep to be enabled, and the
      changes are quite trivial, I've not yet split this patch out into 30-some
      separate patches, as I figured it would be better to get the various
      maintainers thoughts on how to best merge this change along with
      the seqcount lockdep enablement.
      Feedback would be appreciated!
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      Acked-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
      Cc: James Morris <jmorris@namei.org>
      Cc: Jesse Gross <jesse@nicira.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Mirko Lindner <mlindner@marvell.com>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Roger Luethi <rl@hellgate.ch>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Simon Horman <horms@verge.net.au>
      Cc: Stephen Hemminger <stephen@networkplumber.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Cc: Wensong Zhang <wensong@linux-vs.org>
      Cc: netdev@vger.kernel.org
      Link: http://lkml.kernel.org/r/1381186321-4906-2-git-send-email-john.stultz@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
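The shape of the change can be mocked up in userspace (toy structs; real lockdep registers a lock class for the seqcount rather than setting a flag): a stats structure embedding a sequence counter now gets an explicit init call instead of relying on zero-initialization.

```c
/* Toy mock-up of the pattern: explicit init of a seqcount-bearing
 * stats structure, so a lockdep-style checker has a hook point. */
struct seqcount_mock { int sequence; int initialized; };
struct u64_stats_sync_mock { struct seqcount_mock syncp; };

static void u64_stats_init_mock(struct u64_stats_sync_mock *s)
{
    s->syncp.sequence = 0;
    s->syncp.initialized = 1;  /* stands in for lockdep class registration */
}
```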
  27. 04 Sep, 2013 1 commit
  28. 26 Jul, 2013 1 commit
  29. 23 Jul, 2013 3 commits
  30. 12 Jun, 2013 4 commits
  31. 01 Jun, 2013 1 commit
  32. 28 May, 2013 1 commit
  33. 19 Apr, 2013 1 commit