Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:
"A moderately sized pile of fixes, some specifically for merge window
introduced regressions although others are for longer standing items
and have been queued up for -stable.
  I'm kind of tired of all the RDS protocol bugs over the years, to be
  honest; they're way out of proportion to the number of people who
  actually use it.
1) Fix missing range initialization in netfilter IPSET, from Jozsef
Kadlecsik.
2) ieee80211_local->tim_lock needs to use BH disabling, from Johannes
Berg.
3) Fix DMA syncing in SFC driver, from Ben Hutchings.
4) Fix regression in BOND device MAC address setting, from Jiri
Pirko.
5) Missing usb_free_urb in ISDN Hisax driver, from Marina Makienko.
6) Fix UDP checksumming in bnx2x driver for 57710 and 57711 chips,
fix from Dmitry Kravkov.
7) Missing cfgspace_lock initialization in BCMA driver.
8) Validate parameter size for SCTP assoc stats getsockopt(), from
Guenter Roeck.
9) Fix SCTP association hangs, from Lee A Roberts.
10) Fix jumbo frame handling in r8169, from Francois Romieu.
11) Fix phy_device memory leak, from Petr Malat.
12) Omit trailing FCS from frames received in BGMAC driver, from Hauke
Mehrtens.
13) Missing socket refcount release in L2TP, from Guillaume Nault.
14) sctp_endpoint_init should respect passed in gfp_t, rather than use
GFP_KERNEL unconditionally. From Dan Carpenter.
15) Add ASIX AX88179 USB driver, from Freddy Xin.
16) Remove MAINTAINERS entries for drivers deleted during the merge
window, from Cesar Eduardo Barros.
17) RDS protocol can try to allocate huge amounts of memory, check
that the user's request length makes sense, from Cong Wang.
18) SCTP should use the provided KMALLOC_MAX_SIZE instead of its own,
    bogus, definition. From Cong Wang.
19) Fix deadlocks in FEC driver by moving TX reclaim into NAPI poll,
from Frank Li. Also, fix a build error introduced in the merge
window.
20) Fix bogus purging of default routes in ipv6, from Lorenzo Colitti.
21) Don't double count RTT measurements when we leave the TCP receive
fast path, from Neal Cardwell."
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (61 commits)
tcp: fix double-counted receiver RTT when leaving receiver fast path
CAIF: fix sparse warning for caif_usb
rds: simplify a warning message
net: fec: fix build error in no MXC platform
net: ipv6: Don't purge default router if accept_ra=2
net: fec: put tx to napi poll function to fix dead lock
sctp: use KMALLOC_MAX_SIZE instead of its own MAX_KMALLOC_SIZE
rds: limit the size allocated by rds_message_alloc()
MAINTAINERS: remove eexpress
MAINTAINERS: remove drivers/net/wan/cycx*
MAINTAINERS: remove 3c505
caif_dev: fix sparse warnings for caif_flow_cb
ax88179_178a: ASIX AX88179_178A USB 3.0/2.0 to gigabit ethernet adapter driver
sctp: use the passed in gfp flags instead GFP_KERNEL
ipv[4|6]: correct dropwatch false positive in local_deliver_finish
l2tp: Restore socket refcount when sendmsg succeeds
net/phy: micrel: Disable asymmetric pause for KSZ9021
bgmac: omit the fcs
phy: Fix phy_device_free memory leak
bnx2x: Fix KR2 work-around condition
...
MAINTAINERS
@@ -114,12 +114,6 @@ Maintainers List (try to look for most precise areas first)
 
 		-----------------------------------
 
-3C505 NETWORK DRIVER
-M:	Philip Blundell <philb@gnu.org>
-L:	netdev@vger.kernel.org
-S:	Maintained
-F:	drivers/net/ethernet/i825xx/3c505*
-
 3C59X NETWORK DRIVER
 M:	Steffen Klassert <klassert@mathematik.tu-chemnitz.de>
 L:	netdev@vger.kernel.org
@@ -2361,12 +2355,6 @@ W:	http://www.arm.linux.org.uk/
 S:	Maintained
 F:	drivers/video/cyber2000fb.*
 
-CYCLADES 2X SYNC CARD DRIVER
-M:	Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
-W:	http://oops.ghostprotocols.net:81/blog
-S:	Maintained
-F:	drivers/net/wan/cycx*
-
 CYCLADES ASYNC MUX DRIVER
 W:	http://www.cyclades.com/
 S:	Orphan
@@ -3067,12 +3055,6 @@ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kristoffer/linux-hpc.git
 F:	drivers/video/s1d13xxxfb.c
 F:	include/video/s1d13xxxfb.h
 
-ETHEREXPRESS-16 NETWORK DRIVER
-M:	Philip Blundell <philb@gnu.org>
-L:	netdev@vger.kernel.org
-S:	Maintained
-F:	drivers/net/ethernet/i825xx/eexpress.*
-
 ETHERNET BRIDGE
 M:	Stephen Hemminger <stephen@networkplumber.org>
 L:	bridge@lists.linux-foundation.org
drivers/bcma/driver_pci_host.c
@@ -404,6 +404,8 @@ void bcma_core_pci_hostmode_init(struct bcma_drv_pci *pc)
 		return;
 	}
 
+	spin_lock_init(&pc_host->cfgspace_lock);
+
 	pc->host_controller = pc_host;
 	pc_host->pci_controller.io_resource = &pc_host->io_resource;
 	pc_host->pci_controller.mem_resource = &pc_host->mem_resource;
drivers/connector/cn_proc.c
@@ -313,6 +313,12 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
 	    (task_active_pid_ns(current) != &init_pid_ns))
 		return;
 
+	/* Can only change if privileged. */
+	if (!capable(CAP_NET_ADMIN)) {
+		err = EPERM;
+		goto out;
+	}
+
 	mc_op = (enum proc_cn_mcast_op *)msg->data;
 	switch (*mc_op) {
 	case PROC_CN_MCAST_LISTEN:
@@ -325,6 +331,8 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
 		err = EINVAL;
 		break;
 	}
 
+out:
+
 	cn_proc_ack(err, msg->seq, msg->ack);
 }
drivers/isdn/hisax/st5481_usb.c
@@ -294,13 +294,13 @@ int st5481_setup_usb(struct st5481_adapter *adapter)
 	// Allocate URBs and buffers for interrupt endpoint
 	urb = usb_alloc_urb(0, GFP_KERNEL);
 	if (!urb) {
-		return -ENOMEM;
+		goto err1;
 	}
 	intr->urb = urb;
 
 	buf = kmalloc(INT_PKT_SIZE, GFP_KERNEL);
 	if (!buf) {
-		return -ENOMEM;
+		goto err2;
 	}
 
 	endpoint = &altsetting->endpoint[EP_INT-1];
@@ -313,6 +313,14 @@ int st5481_setup_usb(struct st5481_adapter *adapter)
 			 endpoint->desc.bInterval);
 
 	return 0;
+err2:
+	usb_free_urb(intr->urb);
+	intr->urb = NULL;
+err1:
+	usb_free_urb(ctrl->urb);
+	ctrl->urb = NULL;
+
+	return -ENOMEM;
 }
 
 /*
drivers/net/bonding/bond_main.c
@@ -1629,7 +1629,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
 
 	/* If this is the first slave, then we need to set the master's hardware
 	 * address to be the same as the slave's. */
-	if (bond->dev_addr_from_first)
+	if (bond->slave_cnt == 0 && bond->dev_addr_from_first)
 		bond_set_dev_addr(bond->dev, slave_dev);
 
 	new_slave = kzalloc(sizeof(struct slave), GFP_KERNEL);
drivers/net/ethernet/broadcom/bgmac.c
@@ -301,12 +301,16 @@ static int bgmac_dma_rx_read(struct bgmac *bgmac, struct bgmac_dma_ring *ring,
 			bgmac_err(bgmac, "Found poisoned packet at slot %d, DMA issue!\n",
 				  ring->start);
 		} else {
+			/* Omit CRC. */
+			len -= ETH_FCS_LEN;
+
 			new_skb = netdev_alloc_skb_ip_align(bgmac->net_dev, len);
 			if (new_skb) {
 				skb_put(new_skb, len);
 				skb_copy_from_linear_data_offset(skb, BGMAC_RX_FRAME_OFFSET,
 								 new_skb->data,
 								 len);
+				skb_checksum_none_assert(skb);
 				new_skb->protocol =
 					eth_type_trans(new_skb, bgmac->net_dev);
 				netif_receive_skb(new_skb);
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -3142,7 +3142,7 @@ static inline __le16 bnx2x_csum_fix(unsigned char *t_header, u16 csum, s8 fix)
 	tsum = ~csum_fold(csum_add((__force __wsum) csum,
 				   csum_partial(t_header, -fix, 0)));
 
-	return bswab16(csum);
+	return bswab16(tsum);
 }
 
 static inline u32 bnx2x_xmit_type(struct bnx2x *bp, struct sk_buff *skb)
drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
@@ -281,6 +281,8 @@ static int bnx2x_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
 			cmd->lp_advertising |= ADVERTISED_2500baseX_Full;
 		if (status & LINK_STATUS_LINK_PARTNER_10GXFD_CAPABLE)
 			cmd->lp_advertising |= ADVERTISED_10000baseT_Full;
+		if (status & LINK_STATUS_LINK_PARTNER_20GXFD_CAPABLE)
+			cmd->lp_advertising |= ADVERTISED_20000baseKR2_Full;
 	}
 
 	cmd->maxtxpkt = 0;
@@ -463,6 +465,10 @@ static int bnx2x_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
 					     ADVERTISED_10000baseKR_Full))
 				bp->link_params.speed_cap_mask[cfg_idx] |=
 					PORT_HW_CFG_SPEED_CAPABILITY_D0_10G;
+
+			if (cmd->advertising & ADVERTISED_20000baseKR2_Full)
+				bp->link_params.speed_cap_mask[cfg_idx] |=
+					PORT_HW_CFG_SPEED_CAPABILITY_D0_20G;
 		}
 	} else { /* forced speed */
 		/* advertise the requested speed and duplex if supported */
drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
@@ -10422,6 +10422,28 @@ static void bnx2x_848xx_set_link_led(struct bnx2x_phy *phy,
 					 MDIO_PMA_DEVAD,
 					 MDIO_PMA_REG_8481_LED1_MASK,
 					 0x0);
+			if (phy->type ==
+			    PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84834) {
+				/* Disable MI_INT interrupt before setting LED4
+				 * source to constant off.
+				 */
+				if (REG_RD(bp, NIG_REG_MASK_INTERRUPT_PORT0 +
+					   params->port*4) &
+				    NIG_MASK_MI_INT) {
+					params->link_flags |=
+						LINK_FLAGS_INT_DISABLED;
+
+					bnx2x_bits_dis(
+						bp,
+						NIG_REG_MASK_INTERRUPT_PORT0 +
+						params->port*4,
+						NIG_MASK_MI_INT);
+				}
+				bnx2x_cl45_write(bp, phy,
+						 MDIO_PMA_DEVAD,
+						 MDIO_PMA_REG_8481_SIGNAL_MASK,
+						 0x0);
+			}
 		}
 		break;
 	case LED_MODE_ON:
@@ -10468,6 +10490,28 @@ static void bnx2x_848xx_set_link_led(struct bnx2x_phy *phy,
 					 MDIO_PMA_DEVAD,
 					 MDIO_PMA_REG_8481_LED1_MASK,
 					 0x20);
+			if (phy->type ==
+			    PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84834) {
+				/* Disable MI_INT interrupt before setting LED4
+				 * source to constant on.
+				 */
+				if (REG_RD(bp, NIG_REG_MASK_INTERRUPT_PORT0 +
+					   params->port*4) &
+				    NIG_MASK_MI_INT) {
+					params->link_flags |=
+						LINK_FLAGS_INT_DISABLED;
+
+					bnx2x_bits_dis(
+						bp,
+						NIG_REG_MASK_INTERRUPT_PORT0 +
+						params->port*4,
+						NIG_MASK_MI_INT);
+				}
+				bnx2x_cl45_write(bp, phy,
+						 MDIO_PMA_DEVAD,
+						 MDIO_PMA_REG_8481_SIGNAL_MASK,
+						 0x20);
+			}
 		}
 		break;
 
@@ -10532,6 +10576,22 @@ static void bnx2x_848xx_set_link_led(struct bnx2x_phy *phy,
 					 MDIO_PMA_DEVAD,
 					 MDIO_PMA_REG_8481_LINK_SIGNAL,
 					 val);
+			if (phy->type ==
+			    PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84834) {
+				/* Restore LED4 source to external link,
+				 * and re-enable interrupts.
+				 */
+				bnx2x_cl45_write(bp, phy,
+						 MDIO_PMA_DEVAD,
+						 MDIO_PMA_REG_8481_SIGNAL_MASK,
+						 0x40);
+				if (params->link_flags &
+				    LINK_FLAGS_INT_DISABLED) {
+					bnx2x_link_int_enable(params);
+					params->link_flags &=
+						~LINK_FLAGS_INT_DISABLED;
+				}
+			}
 		}
 		break;
 	}
@@ -11791,6 +11851,8 @@ static int bnx2x_populate_int_phy(struct bnx2x *bp, u32 shmem_base, u8 port,
 			phy->media_type = ETH_PHY_KR;
 			phy->flags |= FLAGS_WC_DUAL_MODE;
 			phy->supported &= (SUPPORTED_20000baseKR2_Full |
+					   SUPPORTED_10000baseT_Full |
+					   SUPPORTED_1000baseT_Full |
 					   SUPPORTED_Autoneg |
 					   SUPPORTED_FIBRE |
 					   SUPPORTED_Pause |
@@ -13437,7 +13499,7 @@ void bnx2x_period_func(struct link_params *params, struct link_vars *vars)
 		struct bnx2x_phy *phy = &params->phy[INT_PHY];
 		bnx2x_set_aer_mmd(params, phy);
 		if ((phy->supported & SUPPORTED_20000baseKR2_Full) &&
-		    (phy->speed_cap_mask & SPEED_20000))
+		    (phy->speed_cap_mask & PORT_HW_CFG_SPEED_CAPABILITY_D0_20G))
 			bnx2x_check_kr2_wa(params, vars, phy);
 	bnx2x_check_over_curr(params, vars);
 	if (vars->rx_tx_asic_rst)
drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
@@ -307,7 +307,8 @@ struct link_params {
 	struct bnx2x *bp;
 	u16 req_fc_auto_adv; /* Should be set to TX / BOTH when
 				req_flow_ctrl is set to AUTO */
-	u16 rsrv1;
+	u16 link_flags;
+#define LINK_FLAGS_INT_DISABLED		(1<<0)
 	u32 lfa_base;
 };
 
drivers/net/ethernet/freescale/fec.c
@@ -246,14 +246,13 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	struct bufdesc *bdp;
 	void *bufaddr;
 	unsigned short	status;
-	unsigned long flags;
+	unsigned int index;
 
 	if (!fep->link) {
 		/* Link is down or autonegotiation is in progress. */
 		return NETDEV_TX_BUSY;
 	}
 
-	spin_lock_irqsave(&fep->hw_lock, flags);
 	/* Fill in a Tx ring entry */
 	bdp = fep->cur_tx;
 
@@ -264,7 +263,6 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 		 * This should not happen, since ndev->tbusy should be set.
 		 */
 		printk("%s: tx queue full!.\n", ndev->name);
-		spin_unlock_irqrestore(&fep->hw_lock, flags);
 		return NETDEV_TX_BUSY;
 	}
 
@@ -280,13 +278,13 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	 * 4-byte boundaries. Use bounce buffers to copy data
 	 * and get it aligned. Ugh.
 	 */
+	if (fep->bufdesc_ex)
+		index = (struct bufdesc_ex *)bdp -
+			(struct bufdesc_ex *)fep->tx_bd_base;
+	else
+		index = bdp - fep->tx_bd_base;
+
 	if (((unsigned long) bufaddr) & FEC_ALIGNMENT) {
-		unsigned int index;
-		if (fep->bufdesc_ex)
-			index = (struct bufdesc_ex *)bdp -
-				(struct bufdesc_ex *)fep->tx_bd_base;
-		else
-			index = bdp - fep->tx_bd_base;
 		memcpy(fep->tx_bounce[index], skb->data, skb->len);
 		bufaddr = fep->tx_bounce[index];
 	}
@@ -300,10 +298,7 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 		swap_buffer(bufaddr, skb->len);
 
 	/* Save skb pointer */
-	fep->tx_skbuff[fep->skb_cur] = skb;
-
-	ndev->stats.tx_bytes += skb->len;
-	fep->skb_cur = (fep->skb_cur+1) & TX_RING_MOD_MASK;
+	fep->tx_skbuff[index] = skb;
 
 	/* Push the data cache so the CPM does not get stale memory
 	 * data.
@@ -331,25 +326,21 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 			ebdp->cbd_esc = BD_ENET_TX_INT;
 		}
 	}
-	/* Trigger transmission start */
-	writel(0, fep->hwp + FEC_X_DES_ACTIVE);
 
 	/* If this was the last BD in the ring, start at the beginning again. */
 	if (status & BD_ENET_TX_WRAP)
 		bdp = fep->tx_bd_base;
 	else
 		bdp = fec_enet_get_nextdesc(bdp, fep->bufdesc_ex);
 
-	if (bdp == fep->dirty_tx) {
-		fep->tx_full = 1;
-		netif_stop_queue(ndev);
-	}
-
 	fep->cur_tx = bdp;
 
-	skb_tx_timestamp(skb);
+	if (fep->cur_tx == fep->dirty_tx)
+		netif_stop_queue(ndev);
 
-	spin_unlock_irqrestore(&fep->hw_lock, flags);
+	/* Trigger transmission start */
+	writel(0, fep->hwp + FEC_X_DES_ACTIVE);
+
+	skb_tx_timestamp(skb);
 
 	return NETDEV_TX_OK;
 }
@@ -406,11 +397,8 @@ fec_restart(struct net_device *ndev, int duplex)
 	writel((unsigned long)fep->bd_dma + sizeof(struct bufdesc)
 			* RX_RING_SIZE,	fep->hwp + FEC_X_DES_START);
 
-	fep->dirty_tx = fep->cur_tx = fep->tx_bd_base;
 	fep->cur_rx = fep->rx_bd_base;
 
-	/* Reset SKB transmit buffers. */
-	fep->skb_cur = fep->skb_dirty = 0;
 	for (i = 0; i <= TX_RING_MOD_MASK; i++) {
 		if (fep->tx_skbuff[i]) {
 			dev_kfree_skb_any(fep->tx_skbuff[i]);
@@ -573,20 +561,35 @@ fec_enet_tx(struct net_device *ndev)
 	struct bufdesc *bdp;
 	unsigned short status;
 	struct	sk_buff	*skb;
+	int	index = 0;
 
 	fep = netdev_priv(ndev);
-	spin_lock(&fep->hw_lock);
 	bdp = fep->dirty_tx;
 
+	/* get next bdp of dirty_tx */
+	if (bdp->cbd_sc & BD_ENET_TX_WRAP)
+		bdp = fep->tx_bd_base;
+	else
+		bdp = fec_enet_get_nextdesc(bdp, fep->bufdesc_ex);
+
 	while (((status = bdp->cbd_sc) & BD_ENET_TX_READY) == 0) {
-		if (bdp == fep->cur_tx && fep->tx_full == 0)
+
+		/* current queue is empty */
+		if (bdp == fep->cur_tx)
 			break;
 
+		if (fep->bufdesc_ex)
+			index = (struct bufdesc_ex *)bdp -
+				(struct bufdesc_ex *)fep->tx_bd_base;
+		else
+			index = bdp - fep->tx_bd_base;
+
 		dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
 				FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE);
 		bdp->cbd_bufaddr = 0;
 
-		skb = fep->tx_skbuff[fep->skb_dirty];
+		skb = fep->tx_skbuff[index];
 
 		/* Check for errors. */
 		if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
 			      BD_ENET_TX_RL | BD_ENET_TX_UN |
@@ -631,8 +634,9 @@ fec_enet_tx(struct net_device *ndev)
 
 		/* Free the sk buffer associated with this last transmit */
 		dev_kfree_skb_any(skb);
-		fep->tx_skbuff[fep->skb_dirty] = NULL;
-		fep->skb_dirty = (fep->skb_dirty + 1) & TX_RING_MOD_MASK;
+		fep->tx_skbuff[index] = NULL;
+
+		fep->dirty_tx = bdp;
 
 		/* Update pointer to next buffer descriptor to be transmitted */
 		if (status & BD_ENET_TX_WRAP)
@@ -642,14 +646,12 @@ fec_enet_tx(struct net_device *ndev)
 
 		/* Since we have freed up a buffer, the ring is no longer full
 		 */
-		if (fep->tx_full) {
-			fep->tx_full = 0;
+		if (fep->dirty_tx != fep->cur_tx) {
 			if (netif_queue_stopped(ndev))
 				netif_wake_queue(ndev);
 		}
 	}
-	fep->dirty_tx = bdp;
-	spin_unlock(&fep->hw_lock);
+	return;
 }
 
 
@@ -816,7 +818,7 @@ fec_enet_interrupt(int irq, void *dev_id)
 		int_events = readl(fep->hwp + FEC_IEVENT);
 		writel(int_events, fep->hwp + FEC_IEVENT);
 
-		if (int_events & FEC_ENET_RXF) {
+		if (int_events & (FEC_ENET_RXF | FEC_ENET_TXF)) {
 			ret = IRQ_HANDLED;
 
 			/* Disable the RX interrupt */
@@ -827,15 +829,6 @@ fec_enet_interrupt(int irq, void *dev_id)
 			}
 		}
 
-		/* Transmit OK, or non-fatal error. Update the buffer
-		 * descriptors. FEC handles all errors, we just discover
-		 * them as part of the transmit process.
-		 */
-		if (int_events & FEC_ENET_TXF) {
-			ret = IRQ_HANDLED;
-			fec_enet_tx(ndev);
-		}
-
 		if (int_events & FEC_ENET_MII) {
 			ret = IRQ_HANDLED;
 			complete(&fep->mdio_done);
@@ -851,6 +844,8 @@ static int fec_enet_rx_napi(struct napi_struct *napi, int budget)
 	int pkts = fec_enet_rx(ndev, budget);
 	struct fec_enet_private *fep = netdev_priv(ndev);
 
+	fec_enet_tx(ndev);
+
 	if (pkts < budget) {
 		napi_complete(napi);
 		writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
@@ -1646,6 +1641,7 @@ static int fec_enet_init(struct net_device *ndev)
 
 	/* ...and the same for transmit */
 	bdp = fep->tx_bd_base;
+	fep->cur_tx = bdp;
 	for (i = 0; i < TX_RING_SIZE; i++) {
 
 		/* Initialize the BD for every fragment in the page. */
@@ -1657,6 +1653,7 @@ static int fec_enet_init(struct net_device *ndev)
 	/* Set the last buffer to wrap */
 	bdp = fec_enet_get_prevdesc(bdp, fep->bufdesc_ex);
 	bdp->cbd_sc |= BD_SC_WRAP;
+	fep->dirty_tx = bdp;
 
 	fec_restart(ndev, 0);
drivers/net/ethernet/freescale/fec.h
@@ -97,6 +97,13 @@ struct bufdesc {
 	unsigned short cbd_sc;	/* Control and status info */
 	unsigned long cbd_bufaddr;	/* Buffer address */
 };
+#else
+struct bufdesc {
+	unsigned short	cbd_sc;	/* Control and status info */
+	unsigned short	cbd_datlen;	/* Data length */
+	unsigned long	cbd_bufaddr;	/* Buffer address */
+};
+#endif
 
 struct bufdesc_ex {
 	struct bufdesc desc;
@@ -107,14 +114,6 @@ struct bufdesc_ex {
 	unsigned short res0[4];
 };
 
-#else
-struct bufdesc {
-	unsigned short	cbd_sc;	/* Control and status info */
-	unsigned short	cbd_datlen;	/* Data length */
-	unsigned long	cbd_bufaddr;	/* Buffer address */
-};
-#endif
-
 /*
  * The following definitions courtesy of commproc.h, which where
  * Copyright (c) 1997 Dan Malek (dmalek@jlc.net).
@@ -214,8 +213,6 @@ struct fec_enet_private {
 	unsigned char *tx_bounce[TX_RING_SIZE];
 	struct	sk_buff *tx_skbuff[TX_RING_SIZE];
 	struct	sk_buff *rx_skbuff[RX_RING_SIZE];
-	ushort	skb_cur;
-	ushort	skb_dirty;
 
 	/* CPM dual port RAM relative addresses */
 	dma_addr_t	bd_dma;
@@ -227,7 +224,6 @@ struct fec_enet_private {
 	/* The ring entries to be free()ed */
 	struct bufdesc	*dirty_tx;
 
-	uint	tx_full;
 	/* hold while accessing the HW like ringbuffer for tx/rx but not MAC */
 	spinlock_t hw_lock;
 
|
|||||||
@@ -4765,8 +4765,10 @@ static void rtl_hw_start_8168bb(struct rtl8169_private *tp)
|
|||||||
|
|
||||||
RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
|
RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
|
||||||
|
|
||||||
rtl_tx_performance_tweak(pdev,
|
if (tp->dev->mtu <= ETH_DATA_LEN) {
|
||||||
(0x5 << MAX_READ_REQUEST_SHIFT) | PCI_EXP_DEVCTL_NOSNOOP_EN);
|
rtl_tx_performance_tweak(pdev, (0x5 << MAX_READ_REQUEST_SHIFT) |
|
||||||
|
PCI_EXP_DEVCTL_NOSNOOP_EN);
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
static void rtl_hw_start_8168bef(struct rtl8169_private *tp)
|
static void rtl_hw_start_8168bef(struct rtl8169_private *tp)
|
||||||
@@ -4789,7 +4791,8 @@ static void __rtl_hw_start_8168cp(struct rtl8169_private *tp)
|
|||||||
|
|
||||||
RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
|
RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
|
||||||
|
|
||||||
rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
|
if (tp->dev->mtu <= ETH_DATA_LEN)
|
||||||
|
rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
|
||||||
|
|
||||||
rtl_disable_clock_request(pdev);
|
rtl_disable_clock_request(pdev);
|
||||||
|
|
||||||
@@ -4822,7 +4825,8 @@ static void rtl_hw_start_8168cp_2(struct rtl8169_private *tp)
|
|||||||
|
|
||||||
RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
|
RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
|
||||||
|
|
||||||
rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
|
if (tp->dev->mtu <= ETH_DATA_LEN)
|
||||||
|
rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
|
||||||
|
|
||||||
RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
|
RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
|
||||||
}
|
}
|
||||||
@@ -4841,7 +4845,8 @@ static void rtl_hw_start_8168cp_3(struct rtl8169_private *tp)
|
|||||||
|
|
||||||
RTL_W8(MaxTxPacketSize, TxPacketMax);
|
RTL_W8(MaxTxPacketSize, TxPacketMax);
|
||||||
|
|
||||||
rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
|
if (tp->dev->mtu <= ETH_DATA_LEN)
|
||||||
|
rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
|
||||||
|
|
||||||
RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
|
RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
|
||||||
}
|
}
|
||||||
@@ -4901,7 +4906,8 @@ static void rtl_hw_start_8168d(struct rtl8169_private *tp)
 	RTL_W8(MaxTxPacketSize, TxPacketMax);
 
-	rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+	if (tp->dev->mtu <= ETH_DATA_LEN)
+		rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
 
 	RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
 }
@@ -4913,7 +4919,8 @@ static void rtl_hw_start_8168dp(struct rtl8169_private *tp)
 	rtl_csi_access_enable_1(tp);
 
-	rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+	if (tp->dev->mtu <= ETH_DATA_LEN)
+		rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
 
 	RTL_W8(MaxTxPacketSize, TxPacketMax);
 
@@ -4972,7 +4979,8 @@ static void rtl_hw_start_8168e_1(struct rtl8169_private *tp)
 	rtl_ephy_init(tp, e_info_8168e_1, ARRAY_SIZE(e_info_8168e_1));
 
-	rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+	if (tp->dev->mtu <= ETH_DATA_LEN)
+		rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
 
 	RTL_W8(MaxTxPacketSize, TxPacketMax);
 
@@ -4998,7 +5006,8 @@ static void rtl_hw_start_8168e_2(struct rtl8169_private *tp)
 	rtl_ephy_init(tp, e_info_8168e_2, ARRAY_SIZE(e_info_8168e_2));
 
-	rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+	if (tp->dev->mtu <= ETH_DATA_LEN)
+		rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
 
 	rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC);
 	rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC);
@@ -779,6 +779,7 @@ efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
 			   tx_queue->txd.entries);
 	}
 
+	efx_device_detach_sync(efx);
 	efx_stop_all(efx);
 	efx_stop_interrupts(efx, true);
 
@@ -832,6 +833,7 @@ out:
 
 	efx_start_interrupts(efx, true);
 	efx_start_all(efx);
+	netif_device_attach(efx->net_dev);
 	return rc;
 
 rollback:
@@ -1641,8 +1643,12 @@ static void efx_stop_all(struct efx_nic *efx)
 	/* Flush efx_mac_work(), refill_workqueue, monitor_work */
 	efx_flush_all(efx);
 
-	/* Stop the kernel transmit interface late, so the watchdog
-	 * timer isn't ticking over the flush */
+	/* Stop the kernel transmit interface.  This is only valid if
+	 * the device is stopped or detached; otherwise the watchdog
+	 * may fire immediately.
+	 */
+	WARN_ON(netif_running(efx->net_dev) &&
+		netif_device_present(efx->net_dev));
 	netif_tx_disable(efx->net_dev);
 
 	efx_stop_datapath(efx);
@@ -1963,16 +1969,18 @@ static int efx_change_mtu(struct net_device *net_dev, int new_mtu)
 	if (new_mtu > EFX_MAX_MTU)
 		return -EINVAL;
 
-	efx_stop_all(efx);
-
 	netif_dbg(efx, drv, efx->net_dev, "changing MTU to %d\n", new_mtu);
 
+	efx_device_detach_sync(efx);
+	efx_stop_all(efx);
+
 	mutex_lock(&efx->mac_lock);
 	net_dev->mtu = new_mtu;
 	efx->type->reconfigure_mac(efx);
 	mutex_unlock(&efx->mac_lock);
 
 	efx_start_all(efx);
+	netif_device_attach(efx->net_dev);
 	return 0;
 }
 
@@ -210,6 +210,7 @@ struct efx_tx_queue {
  *	Will be %NULL if the buffer slot is currently free.
  * @page: The associated page buffer. Valif iff @flags & %EFX_RX_BUF_PAGE.
  *	Will be %NULL if the buffer slot is currently free.
+ * @page_offset: Offset within page. Valid iff @flags & %EFX_RX_BUF_PAGE.
  * @len: Buffer length, in bytes.
  * @flags: Flags for buffer and packet state.
  */
@@ -219,7 +220,8 @@ struct efx_rx_buffer {
 		struct sk_buff *skb;
 		struct page *page;
 	} u;
-	unsigned int len;
+	u16 page_offset;
+	u16 len;
 	u16 flags;
 };
 #define EFX_RX_BUF_PAGE		0x0001
@@ -90,11 +90,7 @@ static unsigned int rx_refill_threshold;
 static inline unsigned int efx_rx_buf_offset(struct efx_nic *efx,
 					     struct efx_rx_buffer *buf)
 {
-	/* Offset is always within one page, so we don't need to consider
-	 * the page order.
-	 */
-	return ((unsigned int) buf->dma_addr & (PAGE_SIZE - 1)) +
-		efx->type->rx_buffer_hash_size;
+	return buf->page_offset + efx->type->rx_buffer_hash_size;
 }
 
 static inline unsigned int efx_rx_buf_size(struct efx_nic *efx)
 {
@@ -187,6 +183,7 @@ static int efx_init_rx_buffers_page(struct efx_rx_queue *rx_queue)
 	struct efx_nic *efx = rx_queue->efx;
 	struct efx_rx_buffer *rx_buf;
 	struct page *page;
+	unsigned int page_offset;
 	struct efx_rx_page_state *state;
 	dma_addr_t dma_addr;
 	unsigned index, count;
@@ -211,12 +208,14 @@ static int efx_init_rx_buffers_page(struct efx_rx_queue *rx_queue)
 		state->dma_addr = dma_addr;
 
 		dma_addr += sizeof(struct efx_rx_page_state);
+		page_offset = sizeof(struct efx_rx_page_state);
 
 	split:
 		index = rx_queue->added_count & rx_queue->ptr_mask;
 		rx_buf = efx_rx_buffer(rx_queue, index);
 		rx_buf->dma_addr = dma_addr + EFX_PAGE_IP_ALIGN;
 		rx_buf->u.page = page;
+		rx_buf->page_offset = page_offset;
 		rx_buf->len = efx->rx_buffer_len - EFX_PAGE_IP_ALIGN;
 		rx_buf->flags = EFX_RX_BUF_PAGE;
 		++rx_queue->added_count;
@@ -227,6 +226,7 @@ static int efx_init_rx_buffers_page(struct efx_rx_queue *rx_queue)
 			/* Use the second half of the page */
 			get_page(page);
 			dma_addr += (PAGE_SIZE >> 1);
+			page_offset += (PAGE_SIZE >> 1);
 			++count;
 			goto split;
 		}
@@ -236,7 +236,8 @@ static int efx_init_rx_buffers_page(struct efx_rx_queue *rx_queue)
 }
 
 static void efx_unmap_rx_buffer(struct efx_nic *efx,
-				struct efx_rx_buffer *rx_buf)
+				struct efx_rx_buffer *rx_buf,
+				unsigned int used_len)
 {
 	if ((rx_buf->flags & EFX_RX_BUF_PAGE) && rx_buf->u.page) {
 		struct efx_rx_page_state *state;
@@ -247,6 +248,10 @@ static void efx_unmap_rx_buffer(struct efx_nic *efx,
 				       state->dma_addr,
 				       efx_rx_buf_size(efx),
 				       DMA_FROM_DEVICE);
+		} else if (used_len) {
+			dma_sync_single_for_cpu(&efx->pci_dev->dev,
+						rx_buf->dma_addr, used_len,
+						DMA_FROM_DEVICE);
 		}
 	} else if (!(rx_buf->flags & EFX_RX_BUF_PAGE) && rx_buf->u.skb) {
 		dma_unmap_single(&efx->pci_dev->dev, rx_buf->dma_addr,
@@ -269,7 +274,7 @@ static void efx_free_rx_buffer(struct efx_nic *efx,
 static void efx_fini_rx_buffer(struct efx_rx_queue *rx_queue,
 			       struct efx_rx_buffer *rx_buf)
 {
-	efx_unmap_rx_buffer(rx_queue->efx, rx_buf);
+	efx_unmap_rx_buffer(rx_queue->efx, rx_buf, 0);
 	efx_free_rx_buffer(rx_queue->efx, rx_buf);
 }
 
@@ -535,10 +540,10 @@ void efx_rx_packet(struct efx_rx_queue *rx_queue, unsigned int index,
 		goto out;
 	}
 
-	/* Release card resources - assumes all RX buffers consumed in-order
-	 * per RX queue
+	/* Release and/or sync DMA mapping - assumes all RX buffers
+	 * consumed in-order per RX queue
 	 */
-	efx_unmap_rx_buffer(efx, rx_buf);
+	efx_unmap_rx_buffer(efx, rx_buf, len);
 
 	/* Prefetch nice and early so data will (hopefully) be in cache by
 	 * the time we look at it.
@@ -731,7 +731,7 @@ static inline void cpsw_add_default_vlan(struct cpsw_priv *priv)
 
 	writel(vlan, &priv->host_port_regs->port_vlan);
 
-	for (i = 0; i < 2; i++)
+	for (i = 0; i < priv->data.slaves; i++)
 		slave_write(priv->slaves + i, vlan, reg);
 
 	cpsw_ale_add_vlan(priv->ale, vlan, ALE_ALL_PORTS << port,
@@ -257,8 +257,7 @@ static struct phy_driver ksphy_driver[] = {
 	.phy_id		= PHY_ID_KSZ9021,
 	.phy_id_mask	= 0x000ffffe,
 	.name		= "Micrel KSZ9021 Gigabit PHY",
-	.features	= (PHY_GBIT_FEATURES | SUPPORTED_Pause
-				| SUPPORTED_Asym_Pause),
+	.features	= (PHY_GBIT_FEATURES | SUPPORTED_Pause),
 	.flags		= PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
 	.config_init	= kszphy_config_init,
 	.config_aneg	= genphy_config_aneg,
@@ -44,13 +44,13 @@ MODULE_LICENSE("GPL");
 
 void phy_device_free(struct phy_device *phydev)
 {
-	kfree(phydev);
+	put_device(&phydev->dev);
 }
 EXPORT_SYMBOL(phy_device_free);
 
 static void phy_device_release(struct device *dev)
 {
-	phy_device_free(to_phy_device(dev));
+	kfree(to_phy_device(dev));
 }
 
 static struct phy_driver genphy_driver;
@@ -201,6 +201,8 @@ struct phy_device *phy_device_create(struct mii_bus *bus, int addr, int phy_id,
 	   there's no driver _already_ loaded. */
 	request_module(MDIO_MODULE_PREFIX MDIO_ID_FMT, MDIO_ID_ARGS(phy_id));
 
+	device_initialize(&dev->dev);
+
 	return dev;
 }
 EXPORT_SYMBOL(phy_device_create);
@@ -363,9 +365,9 @@ int phy_device_register(struct phy_device *phydev)
 	/* Run all of the fixups for this PHY */
 	phy_scan_fixups(phydev);
 
-	err = device_register(&phydev->dev);
+	err = device_add(&phydev->dev);
 	if (err) {
-		pr_err("phy %d failed to register\n", phydev->addr);
+		pr_err("PHY %d failed to add\n", phydev->addr);
 		goto out;
 	}
 
@@ -156,6 +156,24 @@ config USB_NET_AX8817X
 	  This driver creates an interface named "ethX", where X depends on
 	  what other networking devices you have in use.
 
+config USB_NET_AX88179_178A
+	tristate "ASIX AX88179/178A USB 3.0/2.0 to Gigabit Ethernet"
+	depends on USB_USBNET
+	select CRC32
+	select PHYLIB
+	default y
+	help
+	  This option adds support for ASIX AX88179 based USB 3.0/2.0
+	  to Gigabit Ethernet adapters.
+
+	  This driver should work with at least the following devices:
+	    * ASIX AX88179
+	    * ASIX AX88178A
+	    * Sitcomm LN-032
+
+	  This driver creates an interface named "ethX", where X depends on
+	  what other networking devices you have in use.
+
 config USB_NET_CDCETHER
 	tristate "CDC Ethernet support (smart devices such as cable modems)"
 	depends on USB_USBNET