2001-02-10 10:24:03

by Ion Badulescu

Subject: [PATCH] starfire driver for 2.2.19pre

Hi Alan,

This is basically the same driver I sent to Jeff Garzik and you yesterday,
for 2.4.1. Only one byte is different, in the version string. :-) The
patch was generated against 2.2.18, it applies cleanly to 2.2.19pre9.

Please apply.

Thanks,
Ion

--
It is better to keep your mouth shut and be thought a fool,
than to open it and remove all doubt.

-------------------------------------------
--- /usr/src/local/linux-2.2.19pre9-vanilla/drivers/net/starfire.c Fri Feb 9 20:11:44 2001
+++ linux-2.2.18/drivers/net/starfire.c Fri Feb 9 14:31:50 2001
@@ -0,0 +1,1826 @@
+/* starfire.c: Linux device driver for the Adaptec Starfire network adapter. */
+/*
+ Written 1998-2000 by Donald Becker.
+
+ This software may be used and distributed according to the terms of
+ the GNU General Public License (GPL), incorporated herein by reference.
+ Drivers based on or derived from this code fall under the GPL and must
+ retain the authorship, copyright and license notice. This file is not
+ a complete program and may only be used when the entire operating
+ system is licensed under the GPL.
+
+ The author may be reached as [email protected], or C/O
+ Scyld Computing Corporation
+ 410 Severn Ave., Suite 210
+ Annapolis MD 21403
+
+ Support and updates available at
+ http://www.scyld.com/network/starfire.html
+
+ -----------------------------------------------------------
+
+ Linux kernel-specific changes:
+
+ LK1.1.1 (jgarzik):
+ - Use PCI driver interface
+ - Fix MOD_xxx races
+ - softnet fixups
+
+ LK1.1.2 (jgarzik):
+ - Merge Becker version 0.15
+
+ LK1.1.3 (Andrew Morton)
+ - Timer cleanups
+
+ LK1.1.4 (jgarzik):
+ - Merge Becker version 1.03
+
+ LK1.2.1 (Ion Badulescu <[email protected]>)
+ - Support hardware Rx/Tx checksumming
+ - Use the GFP firmware taken from Adaptec's Netware driver
+
+ LK1.2.2 (Ion Badulescu)
+ - Backported to 2.2.x
+
+ LK1.2.3 (Ion Badulescu <[email protected]>)
+ - Fix the flaky mdio interface
+ - More compat clean-ups
+
+TODO:
+ - implement tx_timeout() properly
+ - support ethtool
+*/
+
+/* These identify the driver base version and may not be removed. */
+static const char version1[] =
+"starfire.c:v1.03 7/26/2000 Written by Donald Becker <[email protected]>\n";
+static const char version2[] =
+" Updates and info at http://www.scyld.com/network/starfire.html\n";
+
+static const char version3[] =
+" (unofficial 2.4.x kernel port, version 1.2.3, February 09, 2001)\n";
+
+/* The user-configurable values.
+ These may be modified when a driver module is loaded.*/
+
+/*
+ * Adaptec's license for their Novell drivers (which is where I got the
+ * firmware files) does not allow us to redistribute them. Thus, we can't
+ * include them with this driver.
+ *
+ * However, an end-user is allowed to download and use them, after
+ * converting them to C header files using starfire_firmware.pl.
+ * Once that's done, the #undef must be changed into a #define
+ * for this driver to really use the firmware. Note that Rx/Tx
+ * hardware TCP checksumming is not possible without the firmware.
+ *
+ * I'm currently [Feb 2001] talking to Adaptec about this redistribution
+ * issue. Stay tuned...
+ */
+#undef HAS_FIRMWARE
+/*
+ * The current frame processor firmware fails to checksum a fragment
+ * of length 1. If and when this is fixed, the #define below can be removed.
+ */
+#define HAS_BROKEN_FIRMWARE
+
+/* Used for tuning interrupt latency vs. overhead. */
+static int interrupt_mitigation = 0x0;
+
+static int debug = 1; /* 1 normal messages, 0 quiet .. 7 verbose. */
+static int max_interrupt_work = 20;
+static int mtu = 0;
+/* Maximum number of multicast addresses to filter (vs. rx-all-multicast).
+ The Starfire has a 512 element hash table based on the Ethernet CRC. */
+static int multicast_filter_limit = 32;
+
+#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
+/*
+ * Set the copy breakpoint for the copy-only-tiny-frames scheme.
+ * Setting to > 1518 effectively disables this feature.
+ *
+ * NOTE:
+ * The ia64 doesn't allow unaligned loads, even of integers misaligned
+ * on a 2-byte boundary. Thus we always force copying of packets, as
+ * the starfire doesn't allow misaligned DMAs ;-(
+ * 23/10/2000 - Jes
+ *
+ * Neither does the Alpha. -Ion
+ */
+#if defined(__ia64__) || defined(__alpha__)
+static int rx_copybreak = PKT_BUF_SZ;
+#else
+static int rx_copybreak = 0;
+#endif
+
+/* Used to pass the media type, etc.
+ Both 'options[]' and 'full_duplex[]' exist for driver interoperability.
+ The media type is usually passed in 'options[]'.
+*/
+#define MAX_UNITS 8 /* More are supported, limit only on options */
+static int options[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
+static int full_duplex[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
+
+/* Operational parameters that are set at compile time. */
+
+/* The "native" ring sizes are either 256 or 2048.
+ However in some modes a descriptor may be marked to wrap the ring earlier.
+ The driver allocates a single page for each descriptor ring, constraining
+ the maximum size in an architecture-dependent way.
+*/
+#define RX_RING_SIZE 256
+#define TX_RING_SIZE 32
+/* The completion queues are fixed at 1024 entries, i.e. 4KB or 8KB. */
+#define DONE_Q_SIZE 1024
+
+/* Operational parameters that usually are not changed. */
+/* Time in jiffies before concluding the transmitter is hung. */
+#define TX_TIMEOUT (2*HZ)
+
+#define skb_first_frag_len(skb) (skb->len)
+
+#if !defined(__OPTIMIZE__)
+#warning You must compile this file with the correct options!
+#warning See the last lines of the source file.
+#error You must compile this driver with "-O".
+#endif
+
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <asm/processor.h> /* Processor type for cache alignment. */
+#include <asm/bitops.h>
+#include <asm/io.h>
+
+#ifdef HAS_FIRMWARE
+#include "starfire_firmware.h"
+#endif /* HAS_FIRMWARE */
+
+MODULE_AUTHOR("Donald Becker <[email protected]>");
+MODULE_DESCRIPTION("Adaptec Starfire Ethernet driver");
+MODULE_PARM(max_interrupt_work, "i");
+MODULE_PARM(mtu, "i");
+MODULE_PARM(debug, "i");
+MODULE_PARM(rx_copybreak, "i");
+MODULE_PARM(interrupt_mitigation, "i");
+MODULE_PARM(options, "1-" __MODULE_STRING(MAX_UNITS) "i");
+MODULE_PARM(full_duplex, "1-" __MODULE_STRING(MAX_UNITS) "i");
+
+/*
+ Theory of Operation
+
+I. Board Compatibility
+
+This driver is for the Adaptec 6915 "Starfire" 64 bit PCI Ethernet adapter.
+
+II. Board-specific settings
+
+III. Driver operation
+
+IIIa. Ring buffers
+
+The Starfire hardware uses multiple fixed-size descriptor queues/rings. The
+ring sizes are fixed by the hardware, but may optionally be wrapped
+earlier by the END bit in the descriptor.
+This driver uses that hardware queue size for the Rx ring, where a large
+number of entries has no ill effect beyond increasing the potential backlog.
+The Tx ring is wrapped with the END bit, since a large hardware Tx queue
+disables the queue layer priority ordering and we have no mechanism to
+utilize the hardware two-level priority queue. When modifying the
+RX/TX_RING_SIZE pay close attention to page sizes and the ring-empty warning
+levels.
+
+IIIb/c. Transmit/Receive Structure
+
+See the Adaptec manual for the many possible structures, and options for
+each structure. There are far too many to document here.
+
+For transmit this driver uses type 0/1 transmit descriptors (depending
+on the presence of the zerocopy patches), and relies on automatic
+minimum-length padding. It does not use the completion queue
+consumer index, but instead checks for non-zero status entries.
+
+For receive this driver uses type 0 receive descriptors. The driver
+allocates full frame size skbuffs for the Rx ring buffers, so all frames
+should fit in a single descriptor. The driver does not use the completion
+queue consumer index, but instead checks for non-zero status entries.
+
+When an incoming frame is less than RX_COPYBREAK bytes long, a fresh skbuff
+is allocated and the frame is copied to the new skbuff. When the incoming
+frame is larger, the skbuff is passed directly up the protocol stack.
+Buffers consumed this way are replaced by newly allocated skbuffs in a later
+phase of receive.
+
+A notable aspect of operation is that unaligned buffers are not permitted by
+the Starfire hardware. The IP header at offset 14 in an ethernet frame thus
+isn't longword aligned, which may cause problems on some machines,
+e.g. Alphas and IA64. For these architectures, the driver is forced to copy
+the frame into a new skbuff unconditionally. Copied frames are put into the
+skbuff at an offset of "+2", thus 16-byte aligning the IP header.
+
+IIId. Synchronization
+
+The driver runs as two independent, single-threaded flows of control. One
+is the send-packet routine, which enforces single-threaded use by the
+dev->tbusy flag. The other thread is the interrupt handler, which is single
+threaded by the hardware and interrupt handling software.
+
+The send packet thread has partial control over the Tx ring and 'dev->tbusy'
+flag. It sets the tbusy flag whenever it's queuing a Tx packet. If the next
+queue slot is empty, it clears the tbusy flag when finished; otherwise it sets
+the 'lp->tx_full' flag.
+
+The interrupt handler has exclusive control over the Rx ring and records stats
+from the Tx ring. After reaping the stats, it marks the Tx queue entry as
+empty by incrementing the dirty_tx mark. Iff the 'lp->tx_full' flag is set, it
+clears both the tx_full and tbusy flags.
+
+IV. Notes
+
+IVb. References
+
+The Adaptec Starfire manuals, available only from Adaptec.
+http://www.scyld.com/expert/100mbps.html
+http://www.scyld.com/expert/NWay.html
+
+IVc. Errata
+
+*/
+
+
+
+/* 2.2.x compatibility code */
+#if LINUX_VERSION_CODE < 0x20300
+#include <linux/kcomp.h>
+
+static LIST_HEAD(pci_drivers);
+
+struct pci_driver_mapping {
+ struct pci_dev *dev;
+ struct pci_driver *drv;
+ void *driver_data;
+};
+
+struct pci_device_id {
+ unsigned int vendor, device;
+ unsigned int subvendor, subdevice;
+ unsigned int class, class_mask;
+ unsigned long driver_data;
+};
+
+struct pci_driver {
+ struct list_head node;
+ struct pci_dev *dev;
+ char *name;
+ const struct pci_device_id *id_table; /* NULL if wants all devices */
+ int (*probe)(struct pci_dev *dev, const struct pci_device_id *id); /* New device inserted */
+ void (*remove)(struct pci_dev *dev); /* Device removed (NULL if not a hot-plug capable driver) */
+ void (*suspend)(struct pci_dev *dev); /* Device suspended */
+ void (*resume)(struct pci_dev *dev); /* Device woken up */
+};
+
+#define PCI_MAX_MAPPINGS 16
+static struct pci_driver_mapping drvmap [PCI_MAX_MAPPINGS] = { { NULL, } , };
+
+#define __devinit
+#define __devinitdata
+#define __devexit
+#define MODULE_DEVICE_TABLE(foo,bar)
+#define SET_MODULE_OWNER(dev)
+#define COMPAT_MOD_INC_USE_COUNT MOD_INC_USE_COUNT
+#define COMPAT_MOD_DEC_USE_COUNT MOD_DEC_USE_COUNT
+#define PCI_ANY_ID (~0)
+#define IORESOURCE_MEM 2
+#define PCI_DMA_FROMDEVICE 0
+#define PCI_DMA_TODEVICE 0
+
+#define request_mem_region(addr, size, name) ((void *)1)
+#define release_mem_region(addr, size)
+#define del_timer_sync(timer) del_timer(timer)
+
+static inline void *pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
+ dma_addr_t *dma_handle)
+{
+ void *virt_ptr;
+
+ virt_ptr = kmalloc(size, GFP_KERNEL);
+ *dma_handle = virt_to_bus(virt_ptr);
+ return virt_ptr;
+}
+#define pci_free_consistent(cookie, size, ptr, dma_ptr) kfree(ptr)
+#define pci_map_single(cookie, address, size, dir) virt_to_bus(address)
+#define pci_unmap_single(cookie, address, size, dir)
+#define pci_dma_sync_single(cookie, address, size, dir)
+#undef pci_resource_flags
+#define pci_resource_flags(dev, i) \
+ ((dev->base_address[i] & IORESOURCE_IO) ? IORESOURCE_IO : IORESOURCE_MEM)
+
+void * pci_get_drvdata (struct pci_dev *dev)
+{
+ int i;
+
+ for (i = 0; i < PCI_MAX_MAPPINGS; i++)
+ if (drvmap[i].dev == dev)
+ return drvmap[i].driver_data;
+
+ return NULL;
+}
+
+void pci_set_drvdata (struct pci_dev *dev, void *driver_data)
+{
+ int i;
+
+ for (i = 0; i < PCI_MAX_MAPPINGS; i++)
+ if (drvmap[i].dev == dev) {
+ drvmap[i].driver_data = driver_data;
+ return;
+ }
+}
+
+const struct pci_device_id *
+pci_compat_match_device(const struct pci_device_id *ids, struct pci_dev *dev)
+{
+ u16 subsystem_vendor, subsystem_device;
+
+ pci_read_config_word(dev, PCI_SUBSYSTEM_VENDOR_ID, &subsystem_vendor);
+ pci_read_config_word(dev, PCI_SUBSYSTEM_ID, &subsystem_device);
+
+ while (ids->vendor || ids->subvendor || ids->class_mask) {
+ if ((ids->vendor == PCI_ANY_ID || ids->vendor == dev->vendor) &&
+ (ids->device == PCI_ANY_ID || ids->device == dev->device) &&
+ (ids->subvendor == PCI_ANY_ID || ids->subvendor == subsystem_vendor) &&
+ (ids->subdevice == PCI_ANY_ID || ids->subdevice == subsystem_device) &&
+ !((ids->class ^ dev->class) & ids->class_mask))
+ return ids;
+ ids++;
+ }
+ return NULL;
+}
+
+static int
+pci_announce_device(struct pci_driver *drv, struct pci_dev *dev)
+{
+ const struct pci_device_id *id;
+ int found, i;
+
+ if (drv->id_table) {
+ id = pci_compat_match_device(drv->id_table, dev);
+ if (!id)
+ return 0;
+ } else
+ id = NULL;
+
+ found = 0;
+ for (i = 0; i < PCI_MAX_MAPPINGS && !found; i++)
+ if (!drvmap[i].dev) {
+ drvmap[i].dev = dev;
+ drvmap[i].drv = drv;
+ found = 1;
+ }
+
+ if (drv->probe(dev, id) >= 0) {
+ if(found)
+ return 1;
+ } else {
+ drvmap[i - 1].dev = NULL;
+ }
+ return 0;
+}
+
+int
+pci_register_driver(struct pci_driver *drv)
+{
+ struct pci_dev *dev;
+ int count = 0, found, i;
+#ifdef CONFIG_PCI
+ list_add_tail(&drv->node, &pci_drivers);
+ for (dev = pci_devices; dev; dev = dev->next) {
+ found = 0;
+ for (i = 0; i < PCI_MAX_MAPPINGS && !found; i++)
+ if (drvmap[i].dev == dev)
+ found = 1;
+ if (!found)
+ count += pci_announce_device(drv, dev);
+ }
+#endif
+ return count;
+}
+
+void
+pci_unregister_driver(struct pci_driver *drv)
+{
+ struct pci_dev *dev;
+ int i, found;
+#ifdef CONFIG_PCI
+ list_del(&drv->node);
+ for (dev = pci_devices; dev; dev = dev->next) {
+ found = 0;
+ for (i = 0; i < PCI_MAX_MAPPINGS && !found; i++)
+ if (drvmap[i].dev == dev)
+ found = 1;
+ if (found) {
+ if (drv->remove)
+ drv->remove(dev);
+ drvmap[i - 1].dev = NULL;
+ }
+ }
+#endif
+}
+
+void *compat_request_region (unsigned long start, unsigned long n, const char *name)
+{
+ if (check_region (start, n) != 0)
+ return NULL;
+ request_region (start, n, name);
+ return (void *) 1;
+}
+
+static inline int pci_module_init(struct pci_driver *drv)
+{
+ int rc = pci_register_driver (drv);
+
+ if (rc > 0)
+ return 0;
+
+ /* if we get here, we need to clean up pci driver instance
+ * and return some sort of error */
+ pci_unregister_driver (drv);
+
+ return -ENODEV;
+}
+
+#define init_tx_timer(dev, func, timeout)
+#define kick_tx_timer(dev, func, timeout) \
+ if (netif_queue_stopped(dev)) { \
+ /* If this happens network layer tells us we're broken. */ \
+ if (jiffies - dev->trans_start > timeout) \
+ func(dev); \
+ }
+
+#else /* LINUX_VERSION_CODE > 0x20300 */
+
+#define COMPAT_MOD_INC_USE_COUNT
+#define COMPAT_MOD_DEC_USE_COUNT
+
+#define init_tx_timer(dev, func, timeout) \
+ dev->tx_timeout = func; \
+ dev->watchdog_timeo = timeout;
+#define kick_tx_timer(dev, func, timeout)
+
+
+#endif /* LINUX_VERSION_CODE > 0x20300 */
+
+
+enum chip_capability_flags {CanHaveMII=1, };
+#define PCI_IOTYPE (PCI_USES_MASTER | PCI_USES_MEM | PCI_ADDR0)
+#define MEM_ADDR_SZ 0x80000 /* And maps in 0.5MB(!). */
+
+#if 0
+#define ADDR_64BITS 1 /* This chip uses 64 bit addresses. */
+#endif
+
+#define HAS_IP_COPYSUM 1
+
+enum chipset {
+ CH_6915 = 0,
+};
+
+static struct pci_device_id starfire_pci_tbl[] __devinitdata = {
+ { 0x9004, 0x6915, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_6915 },
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, starfire_pci_tbl);
+
+/* A chip capabilities table, matching the CH_xxx entries in xxx_pci_tbl[] above. */
+static struct chip_info {
+ const char *name;
+ int io_size;
+ int drv_flags;
+} netdrv_tbl[] __devinitdata = {
+ { "Adaptec Starfire 6915", MEM_ADDR_SZ, CanHaveMII },
+};
+
+
+/* Offsets to the device registers.
+ Unlike software-only systems, device drivers interact with complex hardware.
+ It's not useful to define symbolic names for every register bit in the
+ device. The name can only partially document the semantics and make
+ the driver longer and more difficult to read.
+ In general, only the important configuration values or bits changed
+ multiple times should be defined symbolically.
+*/
+enum register_offsets {
+ PCIDeviceConfig=0x50040, GenCtrl=0x50070, IntrTimerCtrl=0x50074,
+ IntrClear=0x50080, IntrStatus=0x50084, IntrEnable=0x50088,
+ MIICtrl=0x52000, StationAddr=0x50120, EEPROMCtrl=0x51000,
+ TxDescCtrl=0x50090,
+ TxRingPtr=0x50098, HiPriTxRingPtr=0x50094, /* Low and High priority. */
+ TxRingHiAddr=0x5009C, /* 64 bit address extension. */
+ TxProducerIdx=0x500A0, TxConsumerIdx=0x500A4,
+ TxThreshold=0x500B0,
+ CompletionHiAddr=0x500B4, TxCompletionAddr=0x500B8,
+ RxCompletionAddr=0x500BC, RxCompletionQ2Addr=0x500C0,
+ CompletionQConsumerIdx=0x500C4, RxDMACtrl=0x500D0,
+ RxDescQCtrl=0x500D4, RxDescQHiAddr=0x500DC, RxDescQAddr=0x500E0,
+ RxDescQIdx=0x500E8, RxDMAStatus=0x500F0, RxFilterMode=0x500F4,
+ TxMode=0x55000, TxGfpMem=0x58000, RxGfpMem=0x5a000,
+};
+
+/* Bits in the interrupt status/mask registers. */
+enum intr_status_bits {
+ IntrLinkChange=0xf0000000, IntrStatsMax=0x08000000,
+ IntrAbnormalSummary=0x02000000, IntrGeneralTimer=0x01000000,
+ IntrSoftware=0x800000, IntrRxComplQ1Low=0x400000,
+ IntrTxComplQLow=0x200000, IntrPCI=0x100000,
+ IntrDMAErr=0x080000, IntrTxDataLow=0x040000,
+ IntrRxComplQ2Low=0x020000, IntrRxDescQ1Low=0x010000,
+ IntrNormalSummary=0x8000, IntrTxDone=0x4000,
+ IntrTxDMADone=0x2000, IntrTxEmpty=0x1000,
+ IntrEarlyRxQ2=0x0800, IntrEarlyRxQ1=0x0400,
+ IntrRxQ2Done=0x0200, IntrRxQ1Done=0x0100,
+ IntrRxGFPDead=0x80, IntrRxDescQ2Low=0x40,
+ IntrNoTxCsum=0x20, IntrTxBadID=0x10,
+ IntrHiPriTxBadID=0x08, IntrRxGfp=0x04,
+ IntrTxGfp=0x02, IntrPCIPad=0x01,
+ /* not quite bits */
+ IntrRxDone=IntrRxQ2Done | IntrRxQ1Done,
+ IntrRxEmpty=IntrRxDescQ1Low | IntrRxDescQ2Low,
+};
+
+/* Bits in the RxFilterMode register. */
+enum rx_mode_bits {
+ AcceptBroadcast=0x04, AcceptAllMulticast=0x02, AcceptAll=0x01,
+ AcceptMulticast=0x10, AcceptMyPhys=0xE040,
+};
+
+/* Bits in the TxDescCtrl register. */
+enum tx_ctrl_bits {
+ TxDescSpaceUnlim=0x00, TxDescSpace32=0x10, TxDescSpace64=0x20,
+ TxDescSpace128=0x30, TxDescSpace256=0x40,
+ TxDescType0=0x00, TxDescType1=0x01, TxDescType2=0x02,
+ TxDescType3=0x03, TxDescType4=0x04,
+ TxNoDMACompletion=0x08, TxDescQ64bit=0x80,
+ TxHiPriFIFOThreshShift=24, TxPadLenShift=16,
+ TxDMABurstSizeShift=8,
+};
+
+/* Bits in the RxDescQCtrl register. */
+enum rx_ctrl_bits {
+ RxBufferLenShift=16, RxMinDescrThreshShift=0,
+ RxPrefetchMode=0x8000, Rx2048QEntries=0x4000,
+ RxVariableQ=0x2000, RxDesc64bit=0x1000,
+ RxDescQAddr64bit=0x0100,
+ RxDescSpace4=0x000, RxDescSpace8=0x100,
+ RxDescSpace16=0x200, RxDescSpace32=0x300,
+ RxDescSpace64=0x400, RxDescSpace128=0x500,
+ RxConsumerWrEn=0x80,
+};
+
+/* Bits in the RxCompletionAddr register */
+enum rx_compl_bits {
+ RxComplQAddr64bit=0x80, TxComplProducerWrEn=0x40,
+ RxComplType0=0x00, RxComplType1=0x10,
+ RxComplType2=0x20, RxComplType3=0x30,
+ RxComplThreshShift=0,
+};
+
+/* The Rx and Tx buffer descriptors. */
+struct starfire_rx_desc {
+ u32 rxaddr; /* Optionally 64 bits. */
+};
+enum rx_desc_bits {
+ RxDescValid=1, RxDescEndRing=2,
+};
+
+/* Completion queue entry.
+ You must update the page allocation, init_ring and the shift count in rx()
+ if using a larger format. */
+#ifdef HAS_FIRMWARE
+#define csum_rx_status
+#endif /* HAS_FIRMWARE */
+struct rx_done_desc {
+ u32 status; /* Low 16 bits is length. */
+#ifdef csum_rx_status
+ u32 status2; /* Low 16 bits is csum */
+#endif /* csum_rx_status */
+#ifdef full_rx_status
+ u32 status2;
+ u16 vlanid;
+ u16 csum; /* partial checksum */
+ u32 timestamp;
+#endif /* full_rx_status */
+};
+enum rx_done_bits {
+ RxOK=0x20000000, RxFIFOErr=0x10000000, RxBufQ2=0x08000000,
+};
+
+/* Type 1 Tx descriptor. */
+struct starfire_tx_desc {
+ u32 status; /* Upper bits are status, lower 16 length. */
+ u32 first_addr;
+};
+enum tx_desc_bits {
+ TxDescID=0xB0000000,
+ TxCRCEn=0x01000000, TxDescIntr=0x08000000,
+ TxRingWrap=0x04000000, TxCalTCP=0x02000000,
+};
+struct tx_done_report {
+ u32 status; /* timestamp, index. */
+#if 0
+ u32 intrstatus; /* interrupt status */
+#endif
+};
+
+#define PRIV_ALIGN 15 /* Required alignment mask */
+struct rx_ring_info {
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+};
+struct tx_ring_info {
+ struct sk_buff *skb;
+ dma_addr_t first_mapping;
+};
+
+struct netdev_private {
+ /* Descriptor rings first for alignment. */
+ struct starfire_rx_desc *rx_ring;
+ struct starfire_tx_desc *tx_ring;
+ dma_addr_t rx_ring_dma;
+ dma_addr_t tx_ring_dma;
+ /* The addresses of rx/tx-in-place skbuffs. */
+ struct rx_ring_info rx_info[RX_RING_SIZE];
+ struct tx_ring_info tx_info[TX_RING_SIZE];
+ /* Pointers to completion queues (full pages). I should cache line pad..*/
+ u8 pad0[100];
+ struct rx_done_desc *rx_done_q;
+ dma_addr_t rx_done_q_dma;
+ unsigned int rx_done;
+ struct tx_done_report *tx_done_q;
+ unsigned int tx_done;
+ dma_addr_t tx_done_q_dma;
+ struct net_device_stats stats;
+ struct timer_list timer; /* Media monitoring timer. */
+ struct pci_dev *pci_dev;
+ /* Frequently used values: keep some adjacent for cache effect. */
+ unsigned int cur_rx, dirty_rx; /* Producer/consumer ring indices */
+ unsigned int cur_tx, dirty_tx;
+ unsigned int rx_buf_sz; /* Based on MTU+slack. */
+ unsigned int tx_full:1; /* The Tx queue is full. */
+ /* These values keep track of the transceiver/media in use. */
+ unsigned int full_duplex:1, /* Full-duplex operation requested. */
+ medialock:1, /* Xcvr set to fixed speed/duplex. */
+ rx_flowctrl:1,
+ tx_flowctrl:1; /* Use 802.3x flow control. */
+ unsigned int default_port:4; /* Last dev->if_port value. */
+ u32 tx_mode;
+ u8 tx_threshold;
+ /* MII transceiver section. */
+ int mii_cnt; /* Number of MII PHYs found. */
+ u16 advertising; /* NWay media advertisement */
+ unsigned char phys[2]; /* MII device addresses. */
+};
+
+static int mdio_read(struct net_device *dev, int phy_id, int location);
+static void mdio_write(struct net_device *dev, int phy_id, int location, int value);
+static int netdev_open(struct net_device *dev);
+static void check_duplex(struct net_device *dev, int startup);
+static void netdev_timer(unsigned long data);
+static void tx_timeout(struct net_device *dev);
+static void init_ring(struct net_device *dev);
+static int start_tx(struct sk_buff *skb, struct net_device *dev);
+static void intr_handler(int irq, void *dev_instance, struct pt_regs *regs);
+static void netdev_error(struct net_device *dev, int intr_status);
+static int netdev_rx(struct net_device *dev);
+static void netdev_error(struct net_device *dev, int intr_status);
+static void set_rx_mode(struct net_device *dev);
+static struct net_device_stats *get_stats(struct net_device *dev);
+static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
+static int netdev_close(struct net_device *dev);
+
+
+
+static int __devinit starfire_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct netdev_private *np;
+ int i, irq, option, chip_idx = ent->driver_data;
+ struct net_device *dev;
+ static int card_idx = -1;
+ static int printed_version = 0;
+ long ioaddr;
+ int drv_flags, io_size;
+ int boguscnt;
+
+ card_idx++;
+ option = card_idx < MAX_UNITS ? options[card_idx] : 0;
+
+ if (!printed_version++)
+ printk(KERN_INFO "%s" KERN_INFO "%s" KERN_INFO "%s",
+ version1, version2, version3);
+
+ if (pci_enable_device (pdev))
+ return -EIO;
+
+ ioaddr = pci_resource_start (pdev, 0);
+ io_size = pci_resource_len (pdev, 0);
+ if (!ioaddr || ((pci_resource_flags (pdev, 0) & IORESOURCE_MEM) == 0)) {
+ printk (KERN_ERR "starfire %d: no PCI MEM resources, aborting\n", card_idx);
+ return -ENODEV;
+ }
+
+ dev = init_etherdev(NULL, sizeof(*np));
+ if (!dev) {
+ printk (KERN_ERR "starfire %d: cannot alloc etherdev, aborting\n", card_idx);
+ return -ENOMEM;
+ }
+ SET_MODULE_OWNER(dev);
+
+ irq = pdev->irq;
+
+ if (request_mem_region (ioaddr, io_size, dev->name) == NULL) {
+ printk (KERN_ERR "starfire %d: resource 0x%x @ 0x%lx busy, aborting\n",
+ card_idx, io_size, ioaddr);
+ goto err_out_free_netdev;
+ }
+
+ ioaddr = (long) ioremap (ioaddr, io_size);
+ if (!ioaddr) {
+ printk (KERN_ERR "starfire %d: cannot remap 0x%x @ 0x%lx, aborting\n",
+ card_idx, io_size, ioaddr);
+ goto err_out_free_res;
+ }
+
+ pci_set_master (pdev);
+
+ printk(KERN_INFO "%s: %s at 0x%lx, ",
+ dev->name, netdrv_tbl[chip_idx].name, ioaddr);
+
+ /* Serial EEPROM reads are hidden by the hardware. */
+ for (i = 0; i < 6; i++)
+ dev->dev_addr[i] = readb(ioaddr + EEPROMCtrl + 20-i);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
+
+#if ! defined(final_version) /* Dump the EEPROM contents during development. */
+ if (debug > 4)
+ for (i = 0; i < 0x20; i++)
+ printk("%2.2x%s",
+ (unsigned int)readb(ioaddr + EEPROMCtrl + i),
+ i % 16 != 15 ? " " : "\n");
+#endif
+
+ /* Issue soft reset */
+ writel(0x8000, ioaddr + TxMode);
+ udelay(1000);
+ writel(0, ioaddr + TxMode);
+
+ /* Reset the chip to erase previous misconfiguration. */
+ writel(1, ioaddr + PCIDeviceConfig);
+ boguscnt = 1000;
+ while (--boguscnt > 0) {
+ udelay(10);
+ if ((readl(ioaddr + PCIDeviceConfig) & 1) == 0)
+ break;
+ }
+ if (boguscnt == 0)
+ printk("%s: chipset reset never completed!\n", dev->name);
+ /* wait a little longer */
+ udelay(1000);
+
+ dev->base_addr = ioaddr;
+ dev->irq = irq;
+
+ np = dev->priv;
+ pci_set_drvdata(pdev, dev);
+
+ np->pci_dev = pdev;
+ drv_flags = netdrv_tbl[chip_idx].drv_flags;
+
+ if (dev->mem_start)
+ option = dev->mem_start;
+
+ /* The lower four bits are the media type. */
+ if (option > 0) {
+ if (option & 0x200)
+ np->full_duplex = 1;
+ np->default_port = option & 15;
+ if (np->default_port)
+ np->medialock = 1;
+ }
+ if (card_idx < MAX_UNITS && full_duplex[card_idx] > 0)
+ np->full_duplex = 1;
+
+ if (np->full_duplex)
+ np->medialock = 1;
+
+ /* The chip-specific entries in the device structure. */
+ dev->open = &netdev_open;
+ dev->hard_start_xmit = &start_tx;
+ init_tx_timer(dev, tx_timeout, TX_TIMEOUT);
+ dev->stop = &netdev_close;
+ dev->get_stats = &get_stats;
+ dev->set_multicast_list = &set_rx_mode;
+ dev->do_ioctl = &mii_ioctl;
+
+ if (mtu)
+ dev->mtu = mtu;
+
+ if (drv_flags & CanHaveMII) {
+ int phy, phy_idx = 0;
+ int mii_status;
+ for (phy = 0; phy < 32 && phy_idx < 4; phy++) {
+ mdio_write(dev, phy, 0, 0x8000);
+ udelay(500);
+ boguscnt = 1000;
+ while (--boguscnt > 0)
+ if ((mdio_read(dev, phy, 0) & 0x8000) == 0)
+ break;
+ if (boguscnt == 0) {
+ printk("%s: PHY reset never completed!\n", dev->name);
+ continue;
+ }
+ mii_status = mdio_read(dev, phy, 1);
+ if (mii_status != 0x0000) {
+ np->phys[phy_idx++] = phy;
+ np->advertising = mdio_read(dev, phy, 4);
+ printk(KERN_INFO "%s: MII PHY found at address %d, status "
+ "0x%4.4x advertising %4.4x.\n",
+ dev->name, phy, mii_status, np->advertising);
+ /* there can be only one PHY on-board */
+ break;
+ }
+ }
+ np->mii_cnt = phy_idx;
+ }
+
+ return 0;
+
+err_out_free_res:
+ /* 'ioaddr' was overwritten by the failed ioremap() above, so release
+ the memory region using the original BAR 0 address. */
+ release_mem_region (pci_resource_start (pdev, 0), io_size);
+err_out_free_netdev:
+ unregister_netdev (dev);
+ kfree (dev);
+ return -ENODEV;
+}
+
+
+/* Read the MII Management Data I/O (MDIO) interfaces. */
+
+static int mdio_read(struct net_device *dev, int phy_id, int location)
+{
+ long mdio_addr = dev->base_addr + MIICtrl + (phy_id<<7) + (location<<2);
+ int result, boguscnt=1000;
+ /* ??? Should we add a busy-wait here? */
+ do
+ result = readl(mdio_addr);
+ while ((result & 0xC0000000) != 0x80000000 && --boguscnt > 0);
+ if (boguscnt == 0)
+ return 0;
+ if ((result & 0xffff) == 0xffff)
+ return 0;
+ return result & 0xffff;
+}
+
+static void mdio_write(struct net_device *dev, int phy_id, int location, int value)
+{
+ long mdio_addr = dev->base_addr + MIICtrl + (phy_id<<7) + (location<<2);
+ writel(value, mdio_addr);
+ /* The busy-wait will occur before a read. */
+ return;
+}
+
+
+static int netdev_open(struct net_device *dev)
+{
+ struct netdev_private *np = dev->priv;
+ long ioaddr = dev->base_addr;
+ int i, retval;
+
+ /* Do we ever need to reset the chip??? */
+
+ COMPAT_MOD_INC_USE_COUNT;
+
+ retval = request_irq(dev->irq, &intr_handler, SA_SHIRQ, dev->name, dev);
+ if (retval) {
+ COMPAT_MOD_DEC_USE_COUNT;
+ return retval;
+ }
+
+ /* Disable the Rx and Tx, and reset the chip. */
+ writel(0, ioaddr + GenCtrl);
+ writel(1, ioaddr + PCIDeviceConfig);
+ if (debug > 1)
+ printk(KERN_DEBUG "%s: netdev_open() irq %d.\n",
+ dev->name, dev->irq);
+ /* Allocate the various queues, failing gracefully. */
+ if (np->tx_done_q == 0)
+ np->tx_done_q = pci_alloc_consistent(np->pci_dev, PAGE_SIZE, &np->tx_done_q_dma);
+ if (np->rx_done_q == 0)
+ np->rx_done_q = pci_alloc_consistent(np->pci_dev, sizeof(struct rx_done_desc) * DONE_Q_SIZE, &np->rx_done_q_dma);
+ if (np->tx_ring == 0)
+ np->tx_ring = pci_alloc_consistent(np->pci_dev, PAGE_SIZE, &np->tx_ring_dma);
+ if (np->rx_ring == 0)
+ np->rx_ring = pci_alloc_consistent(np->pci_dev, PAGE_SIZE, &np->rx_ring_dma);
+ if (np->tx_done_q == 0 || np->rx_done_q == 0
+ || np->rx_ring == 0 || np->tx_ring == 0) {
+ if (np->tx_done_q)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->tx_done_q, np->tx_done_q_dma);
+ if (np->rx_done_q)
+ pci_free_consistent(np->pci_dev, sizeof(struct rx_done_desc) * DONE_Q_SIZE,
+ np->rx_done_q, np->rx_done_q_dma);
+ if (np->tx_ring)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->tx_ring, np->tx_ring_dma);
+ if (np->rx_ring)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->rx_ring, np->rx_ring_dma);
+ COMPAT_MOD_DEC_USE_COUNT;
+ return -ENOMEM;
+ }
+
+ init_ring(dev);
+ /* Set the size of the Rx buffers. */
+ writel((np->rx_buf_sz << RxBufferLenShift) |
+ (0 << RxMinDescrThreshShift) |
+ RxPrefetchMode | RxVariableQ |
+ RxDescSpace4,
+ ioaddr + RxDescQCtrl);
+
+ /* Set Tx descriptor to type 1 and padding to 0 bytes. */
+ writel((2 << TxHiPriFIFOThreshShift) |
+ (0 << TxPadLenShift) |
+ (4 << TxDMABurstSizeShift) |
+ TxDescSpaceUnlim | TxDescType1,
+ ioaddr + TxDescCtrl);
+
+#if defined(ADDR_64BITS) && defined(__alpha__)
+ /* XXX We really need a 64-bit PCI dma interfaces too... -DaveM */
+ writel(np->rx_ring_dma >> 32, ioaddr + RxDescQHiAddr);
+ writel(np->tx_ring_dma >> 32, ioaddr + TxRingHiAddr);
+#else
+ writel(0, ioaddr + RxDescQHiAddr);
+ writel(0, ioaddr + TxRingHiAddr);
+ writel(0, ioaddr + CompletionHiAddr);
+#endif
+ writel(np->rx_ring_dma, ioaddr + RxDescQAddr);
+ writel(np->tx_ring_dma, ioaddr + TxRingPtr);
+
+ writel(np->tx_done_q_dma, ioaddr + TxCompletionAddr);
+#ifdef full_rx_status
+ writel(np->rx_done_q_dma |
+ RxComplType3 |
+ (0 << RxComplThreshShift),
+ ioaddr + RxCompletionAddr);
+#else /* not full_rx_status */
+#ifdef csum_rx_status
+ writel(np->rx_done_q_dma |
+ RxComplType2 |
+ (0 << RxComplThreshShift),
+ ioaddr + RxCompletionAddr);
+#else /* not csum_rx_status */
+ writel(np->rx_done_q_dma |
+ RxComplType0 |
+ (0 << RxComplThreshShift),
+ ioaddr + RxCompletionAddr);
+#endif /* not csum_rx_status */
+#endif /* not full_rx_status */
+
+ if (debug > 1)
+ printk(KERN_DEBUG "%s: Filling in the station address.\n", dev->name);
+
+ /* Fill both the unused Tx SA register and the Rx perfect filter. */
+ for (i = 0; i < 6; i++)
+ writeb(dev->dev_addr[i], ioaddr + StationAddr + 5-i);
+ for (i = 0; i < 16; i++) {
+ u16 *eaddrs = (u16 *)dev->dev_addr;
+ long setup_frm = ioaddr + 0x56000 + i*16;
+ writew(cpu_to_be16(eaddrs[2]), setup_frm); setup_frm += 4;
+ writew(cpu_to_be16(eaddrs[1]), setup_frm); setup_frm += 4;
+ writew(cpu_to_be16(eaddrs[0]), setup_frm); setup_frm += 8;
+ }
+
+ /* Initialize other registers. */
+ /* Configure the PCI bus bursts and FIFO thresholds. */
+ np->tx_mode = 0; /* Initialized when TxMode set. */
+ np->tx_threshold = 4;
+ writel(np->tx_threshold, ioaddr + TxThreshold);
+ writel(interrupt_mitigation, ioaddr + IntrTimerCtrl);
+
+ if (dev->if_port == 0)
+ dev->if_port = np->default_port;
+
+ netif_start_queue(dev);
+
+ if (debug > 1)
+ printk(KERN_DEBUG "%s: Setting the Rx and Tx modes.\n", dev->name);
+ set_rx_mode(dev);
+
+ np->advertising = mdio_read(dev, np->phys[0], 4);
+ check_duplex(dev, 1);
+
+ /* Set the interrupt mask and enable PCI interrupts. */
+ writel(IntrRxDone | IntrRxEmpty | IntrDMAErr |
+ IntrTxDone | IntrStatsMax | IntrLinkChange |
+ IntrNormalSummary | IntrAbnormalSummary |
+ IntrRxGFPDead | IntrNoTxCsum | IntrTxBadID,
+ ioaddr + IntrEnable);
+ writel(0x00800000 | readl(ioaddr + PCIDeviceConfig),
+ ioaddr + PCIDeviceConfig);
+
+#ifdef HAS_FIRMWARE
+ /* Load Rx/Tx firmware into the frame processors */
+ for (i = 0; i < FIRMWARE_RX_SIZE * 2; i++)
+ writel(cpu_to_le32(firmware_rx[i]), ioaddr + RxGfpMem + i * 4);
+ for (i = 0; i < FIRMWARE_TX_SIZE * 2; i++)
+ writel(cpu_to_le32(firmware_tx[i]), ioaddr + TxGfpMem + i * 4);
+ /* Enable the Rx and Tx units, and the Rx/Tx frame processors. */
+ writel(0x003F, ioaddr + GenCtrl);
+#else /* not HAS_FIRMWARE */
+ /* Enable the Rx and Tx units only. */
+ writel(0x000F, ioaddr + GenCtrl);
+#endif /* not HAS_FIRMWARE */
+
+ if (debug > 2)
+ printk(KERN_DEBUG "%s: Done netdev_open().\n",
+ dev->name);
+
+ /* Set the timer to check for link beat. */
+ init_timer(&np->timer);
+ np->timer.expires = jiffies + 3*HZ;
+ np->timer.data = (unsigned long)dev;
+ np->timer.function = &netdev_timer; /* timer handler */
+ add_timer(&np->timer);
+
+ return 0;
+}
+
+static void check_duplex(struct net_device *dev, int startup)
+{
+ struct netdev_private *np = dev->priv;
+ long ioaddr = dev->base_addr;
+ int new_tx_mode;
+
+ new_tx_mode = 0x0C04 | (np->tx_flowctrl ? 0x0800:0)
+ | (np->rx_flowctrl ? 0x0400:0);
+ if (np->medialock) {
+ if (np->full_duplex)
+ new_tx_mode |= 2;
+ } else {
+ int mii_reg5 = mdio_read(dev, np->phys[0], 5);
+ int negotiated = mii_reg5 & np->advertising;
+ int duplex = (negotiated & 0x0100) || (negotiated & 0x01C0) == 0x0040;
+ if (duplex)
+ new_tx_mode |= 2;
+ if (np->full_duplex != duplex) {
+ np->full_duplex = duplex;
+ if (debug > 1)
+ printk(KERN_INFO "%s: Setting %s-duplex based on MII #%d"
+ " negotiated capability %4.4x.\n", dev->name,
+ duplex ? "full" : "half", np->phys[0], negotiated);
+ }
+ }
+ if (new_tx_mode != np->tx_mode) {
+ np->tx_mode = new_tx_mode;
+ writel(np->tx_mode | 0x8000, ioaddr + TxMode);
+ writel(np->tx_mode, ioaddr + TxMode);
+ }
+}
+
+static void netdev_timer(unsigned long data)
+{
+ struct net_device *dev = (struct net_device *)data;
+ struct netdev_private *np = dev->priv;
+ long ioaddr = dev->base_addr;
+ int next_tick = 60*HZ; /* Check before driver release. */
+
+ if (debug > 3) {
+ printk(KERN_DEBUG "%s: Media selection timer tick, status %8.8x.\n",
+ dev->name, (int)readl(ioaddr + IntrStatus));
+ }
+ check_duplex(dev, 0);
+#if ! defined(final_version)
+ /* This is often falsely triggered. */
+ if (readl(ioaddr + IntrStatus) & 1) {
+ int new_status = readl(ioaddr + IntrStatus);
+ /* Bogus hardware IRQ: Fake an interrupt handler call. */
+ if (new_status & 1) {
+ printk(KERN_ERR "%s: Interrupt blocked, status %8.8x/%8.8x.\n",
+ dev->name, new_status, (int)readl(ioaddr + IntrStatus));
+ intr_handler(dev->irq, dev, 0);
+ }
+ }
+#endif
+
+ np->timer.expires = jiffies + next_tick;
+ add_timer(&np->timer);
+}
+
+static void tx_timeout(struct net_device *dev)
+{
+ struct netdev_private *np = dev->priv;
+ long ioaddr = dev->base_addr;
+
+ printk(KERN_WARNING "%s: Transmit timed out, status %8.8x,"
+ " resetting...\n", dev->name, (int)readl(ioaddr + IntrStatus));
+
+#ifndef __alpha__
+ {
+ int i;
+ printk(KERN_DEBUG " Rx ring %p: ", np->rx_ring);
+ for (i = 0; i < RX_RING_SIZE; i++)
+ printk(" %8.8x", (unsigned int)le32_to_cpu(np->rx_ring[i].rxaddr));
+ printk("\n"KERN_DEBUG" Tx ring %p: ", np->tx_ring);
+ for (i = 0; i < TX_RING_SIZE; i++)
+ printk(" %4.4x", le32_to_cpu(np->tx_ring[i].status));
+ printk("\n");
+ }
+#endif
+
+ /* Perhaps we should reinitialize the hardware here. */
+ dev->if_port = 0;
+ /* Stop and restart the chip's Tx processes. */
+
+ /* Trigger an immediate transmit demand. */
+
+ dev->trans_start = jiffies;
+ np->stats.tx_errors++;
+ netif_wake_queue(dev);
+}
+
+
+/* Initialize the Rx and Tx rings, along with various 'dev' bits. */
+static void init_ring(struct net_device *dev)
+{
+ struct netdev_private *np = dev->priv;
+ int i;
+
+ np->tx_full = 0;
+ np->cur_rx = np->cur_tx = 0;
+ np->dirty_rx = np->rx_done = np->dirty_tx = np->tx_done = 0;
+
+ np->rx_buf_sz = (dev->mtu <= 1500 ? PKT_BUF_SZ : dev->mtu + 32);
+
+ /* Fill in the Rx buffers. Handle allocation failure gracefully. */
+ for (i = 0; i < RX_RING_SIZE; i++) {
+ struct sk_buff *skb = dev_alloc_skb(np->rx_buf_sz);
+ np->rx_info[i].skb = skb;
+ if (skb == NULL)
+ break;
+ np->rx_info[i].mapping = pci_map_single(np->pci_dev, skb->tail, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ skb->dev = dev; /* Mark as being used by this device. */
+ /* Grrr, we cannot offset to correctly align the IP header. */
+ np->rx_ring[i].rxaddr = cpu_to_le32(np->rx_info[i].mapping | RxDescValid);
+ }
+ writew(i - 1, dev->base_addr + RxDescQIdx);
+ np->dirty_rx = (unsigned int)(i - RX_RING_SIZE);
+
+ /* Clear the remainder of the Rx buffer ring. */
+ for ( ; i < RX_RING_SIZE; i++) {
+ np->rx_ring[i].rxaddr = 0;
+ np->rx_info[i].skb = NULL;
+ np->rx_info[i].mapping = 0;
+ }
+ /* Mark the last entry as wrapping the ring. */
+ np->rx_ring[i-1].rxaddr |= cpu_to_le32(RxDescEndRing);
+
+ /* Clear the completion rings. */
+ for (i = 0; i < DONE_Q_SIZE; i++) {
+ np->rx_done_q[i].status = 0;
+ np->tx_done_q[i].status = 0;
+ }
+
+ for (i = 0; i < TX_RING_SIZE; i++) {
+ np->tx_info[i].skb = NULL;
+ np->tx_info[i].first_mapping = 0;
+ np->tx_ring[i].status = 0;
+ }
+ return;
+}
+
+static int start_tx(struct sk_buff *skb, struct net_device *dev)
+{
+ struct netdev_private *np = dev->priv;
+ unsigned int entry;
+
+ kick_tx_timer(dev, tx_timeout, TX_TIMEOUT);
+
+ /* Caution: the write order is important here, set the field
+ with the "ownership" bits last. */
+
+ /* Calculate the next Tx descriptor entry. */
+ entry = np->cur_tx % TX_RING_SIZE;
+
+ np->tx_info[entry].skb = skb;
+ np->tx_info[entry].first_mapping =
+ pci_map_single(np->pci_dev, skb->data, skb_first_frag_len(skb), PCI_DMA_TODEVICE);
+
+ np->tx_ring[entry].first_addr = cpu_to_le32(np->tx_info[entry].first_mapping);
+ /* Add "| TxDescIntr" to generate Tx-done interrupts. */
+ np->tx_ring[entry].status = cpu_to_le32(skb->len | TxDescID | TxCRCEn | 1 << 16);
+
+ if (entry >= TX_RING_SIZE-1) /* Wrap ring */
+ np->tx_ring[entry].status |= cpu_to_le32(TxRingWrap | TxDescIntr);
+
+ if (debug > 5) {
+ printk(KERN_DEBUG "%s: Tx #%d slot %d status %8.8x.\n",
+ dev->name, np->cur_tx, entry,
+ le32_to_cpu(np->tx_ring[entry].status));
+ }
+
+ np->cur_tx++;
+
+ if (entry >= TX_RING_SIZE-1) /* Wrap ring */
+ entry = -1;
+ entry++;
+
+ /* Non-x86: explicitly flush descriptor cache lines here. */
+ /* Ensure everything is written back above before the transmit is
+ initiated. - Jes */
+ wmb();
+
+ /* Update the producer index. */
+ writel(entry * (sizeof(struct starfire_tx_desc) / 8), dev->base_addr + TxProducerIdx);
+
+ if (np->cur_tx - np->dirty_tx >= TX_RING_SIZE - 1) {
+ np->tx_full = 1;
+ netif_stop_queue(dev);
+ }
+
+ dev->trans_start = jiffies;
+
+ return 0;
+}
+
+/* The interrupt handler does all of the Rx thread work and cleans up
+ after the Tx thread. */
+static void intr_handler(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct net_device *dev = (struct net_device *)dev_instance;
+ struct netdev_private *np;
+ long ioaddr;
+ int boguscnt = max_interrupt_work;
+ int consumer;
+ int tx_status;
+
+#ifndef final_version /* Can never occur. */
+ if (dev == NULL) {
+ printk (KERN_ERR "Netdev interrupt handler(): IRQ %d for unknown device.\n", irq);
+ return;
+ }
+#endif
+
+ ioaddr = dev->base_addr;
+ np = dev->priv;
+
+ do {
+ u32 intr_status = readl(ioaddr + IntrClear);
+
+ if (debug > 4)
+ printk(KERN_DEBUG "%s: Interrupt status %4.4x.\n",
+ dev->name, intr_status);
+
+ if (intr_status == 0)
+ break;
+
+ if (intr_status & IntrRxDone)
+ netdev_rx(dev);
+
+ /* Scavenge the skbuff list based on the Tx-done queue.
+ There are redundant checks here that may be cleaned up
+ after the driver has proven to be reliable. */
+ consumer = readl(ioaddr + TxConsumerIdx);
+ if (debug > 4)
+ printk(KERN_DEBUG "%s: Tx Consumer index is %d.\n",
+ dev->name, consumer);
+#if 0
+ if (np->tx_done >= 250 || np->tx_done == 0)
+ printk(KERN_DEBUG "%s: Tx completion entry %d is %8.8x, %d is %8.8x.\n",
+ dev->name, np->tx_done,
+ le32_to_cpu(np->tx_done_q[np->tx_done].status),
+ (np->tx_done+1) & (DONE_Q_SIZE-1),
+ le32_to_cpu(np->tx_done_q[(np->tx_done+1)&(DONE_Q_SIZE-1)].status));
+#endif
+
+ while ((tx_status = le32_to_cpu(np->tx_done_q[np->tx_done].status)) != 0) {
+ if (debug > 4)
+ printk(KERN_DEBUG "%s: Tx completion entry %d is %8.8x.\n",
+ dev->name, np->tx_done, tx_status);
+ if ((tx_status & 0xe0000000) == 0xa0000000) {
+ np->stats.tx_packets++;
+ } else if ((tx_status & 0xe0000000) == 0x80000000) {
+ struct sk_buff *skb;
+ u16 entry = tx_status; /* Implicit truncate */
+ entry /= sizeof(struct starfire_tx_desc);
+
+ skb = np->tx_info[entry].skb;
+ np->tx_info[entry].skb = NULL;
+ pci_unmap_single(np->pci_dev,
+ np->tx_info[entry].first_mapping,
+ skb_first_frag_len(skb),
+ PCI_DMA_TODEVICE);
+ np->tx_info[entry].first_mapping = 0;
+
+ /* Scavenge the descriptor. */
+ dev_kfree_skb_irq(skb);
+
+ np->dirty_tx++;
+ }
+ np->tx_done_q[np->tx_done].status = 0;
+ np->tx_done = (np->tx_done+1) & (DONE_Q_SIZE-1);
+ }
+ writew(np->tx_done, ioaddr + CompletionQConsumerIdx + 2);
+
+ if (np->tx_full && np->cur_tx - np->dirty_tx < TX_RING_SIZE - 4) {
+ /* The ring is no longer full, wake the queue. */
+ np->tx_full = 0;
+ netif_wake_queue(dev);
+ }
+
+ /* Abnormal error summary/uncommon events handlers. */
+ if (intr_status & IntrAbnormalSummary)
+ netdev_error(dev, intr_status);
+
+ if (--boguscnt < 0) {
+ printk(KERN_WARNING "%s: Too much work at interrupt, "
+ "status=0x%4.4x.\n",
+ dev->name, intr_status);
+ break;
+ }
+ } while (1);
+
+ if (debug > 4)
+ printk(KERN_DEBUG "%s: exiting interrupt, status=%#4.4x.\n",
+ dev->name, (int)readl(ioaddr + IntrStatus));
+
+#ifndef final_version
+ /* Code that should never be run! Remove after testing.. */
+ {
+ static int stopit = 10;
+ if (!netif_running(dev) && --stopit < 0) {
+ printk(KERN_ERR "%s: Emergency stop, looping startup interrupt.\n",
+ dev->name);
+ free_irq(irq, dev);
+ }
+ }
+#endif
+}
+
+/* This routine is logically part of the interrupt handler, but separated
+ for clarity and better register allocation. */
+static int netdev_rx(struct net_device *dev)
+{
+ struct netdev_private *np = dev->priv;
+ int boguscnt = np->dirty_rx + RX_RING_SIZE - np->cur_rx;
+ u32 desc_status;
+
+ if (np->rx_done_q == 0) {
+ printk(KERN_ERR "%s: rx_done_q is NULL! rx_done is %d. %p.\n",
+ dev->name, np->rx_done, np->tx_done_q);
+ return 0;
+ }
+
+ /* If EOP is set on the next entry, it's a new packet. Send it up. */
+ while ((desc_status = le32_to_cpu(np->rx_done_q[np->rx_done].status)) != 0) {
+ struct sk_buff *skb;
+ u16 pkt_len;
+ int entry;
+
+ if (debug > 4)
+ printk(KERN_DEBUG " netdev_rx() status of %d was %8.8x.\n", np->rx_done, desc_status);
+ if (--boguscnt < 0)
+ break;
+ if ( ! (desc_status & RxOK)) {
+ /* There was an error. */
+ if (debug > 2)
+ printk(KERN_DEBUG " netdev_rx() Rx error was %8.8x.\n", desc_status);
+ np->stats.rx_errors++;
+ if (desc_status & RxFIFOErr)
+ np->stats.rx_fifo_errors++;
+ goto next_rx;
+ }
+
+ pkt_len = desc_status; /* Implicitly Truncate */
+ entry = (desc_status >> 16) & 0x7ff;
+
+#ifndef final_version
+ if (debug > 4)
+ printk(KERN_DEBUG " netdev_rx() normal Rx pkt length %d, bogus_cnt %d.\n", pkt_len, boguscnt);
+#endif
+ /* Check if the packet is long enough to accept without copying
+ to a minimally-sized skbuff. */
+ if (pkt_len < rx_copybreak
+ && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
+ skb->dev = dev;
+ skb_reserve(skb, 2); /* 16 byte align the IP header */
+ pci_dma_sync_single(np->pci_dev,
+ np->rx_info[entry].mapping,
+ pkt_len, PCI_DMA_FROMDEVICE);
+#if HAS_IP_COPYSUM /* Call copy + cksum if available. */
+ eth_copy_and_sum(skb, np->rx_info[entry].skb->tail, pkt_len, 0);
+ skb_put(skb, pkt_len);
+#else
+ memcpy(skb_put(skb, pkt_len), np->rx_info[entry].skb->tail, pkt_len);
+#endif
+ } else {
+ char *temp;
+
+ pci_unmap_single(np->pci_dev, np->rx_info[entry].mapping, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ skb = np->rx_info[entry].skb;
+ temp = skb_put(skb, pkt_len);
+ np->rx_info[entry].skb = NULL;
+ np->rx_info[entry].mapping = 0;
+ }
+#ifndef final_version /* Remove after testing. */
+ /* You will want this info for the initial debug. */
+ if (debug > 5)
+ printk(KERN_DEBUG " Rx data %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:"
+ "%2.2x %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x %2.2x%2.2x "
+ "%d.%d.%d.%d.\n",
+ skb->data[0], skb->data[1], skb->data[2], skb->data[3],
+ skb->data[4], skb->data[5], skb->data[6], skb->data[7],
+ skb->data[8], skb->data[9], skb->data[10],
+ skb->data[11], skb->data[12], skb->data[13],
+ skb->data[14], skb->data[15], skb->data[16],
+ skb->data[17]);
+#endif
+ skb->protocol = eth_type_trans(skb, dev);
+#if defined(full_rx_status) || defined(csum_rx_status)
+ if (le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0x01000000) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ }
+ /*
+ * This feature doesn't seem to be working, at least
+ * with the two firmware versions I have. If the GFP sees
+ * a fragment, it either ignores it completely, or reports
+ * "bad checksum" on it.
+ *
+ * Maybe I missed something -- corrections are welcome.
+ * Until then, the printk stays. :-) -Ion
+ */
+ else if (le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0x00400000) {
+ skb->ip_summed = CHECKSUM_HW;
+ skb->csum = le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0xffff;
+ printk(KERN_DEBUG "%s: checksum_hw, status2 = %x\n", dev->name, np->rx_done_q[np->rx_done].status2);
+ }
+#endif
+ netif_rx(skb);
+ dev->last_rx = jiffies;
+ np->stats.rx_packets++;
+
+next_rx:
+ np->cur_rx++;
+ np->rx_done_q[np->rx_done].status = 0;
+ np->rx_done = (np->rx_done + 1) & (DONE_Q_SIZE-1);
+ }
+ writew(np->rx_done, dev->base_addr + CompletionQConsumerIdx);
+
+ /* Refill the Rx ring buffers. */
+ for (; np->cur_rx - np->dirty_rx > 0; np->dirty_rx++) {
+ struct sk_buff *skb;
+ int entry = np->dirty_rx % RX_RING_SIZE;
+ if (np->rx_info[entry].skb == NULL) {
+ skb = dev_alloc_skb(np->rx_buf_sz);
+ np->rx_info[entry].skb = skb;
+ if (skb == NULL)
+ break; /* Better luck next round. */
+ np->rx_info[entry].mapping =
+ pci_map_single(np->pci_dev, skb->tail, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ skb->dev = dev; /* Mark as being used by this device. */
+ np->rx_ring[entry].rxaddr =
+ cpu_to_le32(np->rx_info[entry].mapping | RxDescValid);
+ }
+ if (entry == RX_RING_SIZE - 1)
+ np->rx_ring[entry].rxaddr |= cpu_to_le32(RxDescEndRing);
+ /* We could defer this until later... */
+ writew(entry, dev->base_addr + RxDescQIdx);
+ }
+
+ if (debug > 5
+ || memcmp(np->pad0, np->pad0 + 1, sizeof(np->pad0) -1))
+ printk(KERN_DEBUG " exiting netdev_rx() status of %d was %8.8x %d.\n",
+ np->rx_done, desc_status,
+ memcmp(np->pad0, np->pad0 + 1, sizeof(np->pad0) -1));
+
+ /* Restart Rx engine if stopped. */
+ return 0;
+}
+
+static void netdev_error(struct net_device *dev, int intr_status)
+{
+ struct netdev_private *np = dev->priv;
+
+ if (intr_status & IntrLinkChange) {
+ printk(KERN_NOTICE "%s: Link changed: Autonegotiation advertising"
+ " %4.4x, partner %4.4x.\n", dev->name,
+ mdio_read(dev, np->phys[0], 4),
+ mdio_read(dev, np->phys[0], 5));
+ check_duplex(dev, 0);
+ }
+ if (intr_status & IntrStatsMax) {
+ get_stats(dev);
+ }
+ /* Came close to underrunning the Tx FIFO, increase threshold. */
+ if (intr_status & IntrTxDataLow)
+ writel(++np->tx_threshold, dev->base_addr + TxThreshold);
+ if ((intr_status & ~(IntrAbnormalSummary|IntrLinkChange|IntrStatsMax|IntrTxDataLow|1)) && debug)
+ printk(KERN_ERR "%s: Something Wicked happened! %4.4x.\n",
+ dev->name, intr_status);
+ /* Hmmmmm, it's not clear how to recover from DMA faults. */
+ if (intr_status & IntrDMAErr)
+ np->stats.tx_fifo_errors++;
+}
+
+static struct net_device_stats *get_stats(struct net_device *dev)
+{
+ long ioaddr = dev->base_addr;
+ struct netdev_private *np = dev->priv;
+
+ /* This adapter architecture needs no SMP locks. */
+ np->stats.tx_bytes = readl(ioaddr + 0x57010);
+ np->stats.rx_bytes = readl(ioaddr + 0x57044);
+ np->stats.tx_packets = readl(ioaddr + 0x57000);
+ np->stats.tx_aborted_errors =
+ readl(ioaddr + 0x57024) + readl(ioaddr + 0x57028);
+ np->stats.tx_window_errors = readl(ioaddr + 0x57018);
+ np->stats.collisions =
+ readl(ioaddr + 0x57004) + readl(ioaddr + 0x57008);
+
+ /* The chip need only report frames that were silently dropped. */
+ np->stats.rx_dropped += readw(ioaddr + RxDMAStatus);
+ writew(0, ioaddr + RxDMAStatus);
+ np->stats.rx_crc_errors = readl(ioaddr + 0x5703C);
+ np->stats.rx_frame_errors = readl(ioaddr + 0x57040);
+ np->stats.rx_length_errors = readl(ioaddr + 0x57058);
+ np->stats.rx_missed_errors = readl(ioaddr + 0x5707C);
+
+ return &np->stats;
+}
+
+/* The little-endian AUTODIN II ethernet CRC calculations.
+ A big-endian version is also available.
+ This is slow but compact code. Do not use this routine for bulk data,
+ use a table-based routine instead.
+ This is common code and should be moved to net/core/crc.c.
+ Chips may use the upper or lower CRC bits, and may reverse and/or invert
+ them. Select the endian-ness that results in minimal calculations.
+*/
+static unsigned const ethernet_polynomial_le = 0xedb88320U;
+static inline unsigned ether_crc_le(int length, unsigned char *data)
+{
+ unsigned int crc = 0xffffffff; /* Initial value. */
+ while(--length >= 0) {
+ unsigned char current_octet = *data++;
+ int bit;
+ for (bit = 8; --bit >= 0; current_octet >>= 1) {
+ if ((crc ^ current_octet) & 1) {
+ crc >>= 1;
+ crc ^= ethernet_polynomial_le;
+ } else
+ crc >>= 1;
+ }
+ }
+ return crc;
+}
+
+static void set_rx_mode(struct net_device *dev)
+{
+ long ioaddr = dev->base_addr;
+ u32 rx_mode;
+ struct dev_mc_list *mclist;
+ int i;
+
+ if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
+ /* Unconditionally log net taps. */
+ printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n", dev->name);
+ rx_mode = AcceptBroadcast|AcceptAllMulticast|AcceptAll|AcceptMyPhys;
+ } else if ((dev->mc_count > multicast_filter_limit)
+ || (dev->flags & IFF_ALLMULTI)) {
+ /* Too many to match, or accept all multicasts. */
+ rx_mode = AcceptBroadcast|AcceptAllMulticast|AcceptMyPhys;
+ } else if (dev->mc_count <= 15) {
+ /* Use the 16 element perfect filter. */
+ long filter_addr = ioaddr + 0x56000 + 1*16;
+ for (i = 1, mclist = dev->mc_list; mclist && i <= dev->mc_count;
+ i++, mclist = mclist->next) {
+ u16 *eaddrs = (u16 *)mclist->dmi_addr;
+ writew(cpu_to_be16(eaddrs[2]), filter_addr); filter_addr += 4;
+ writew(cpu_to_be16(eaddrs[1]), filter_addr); filter_addr += 4;
+ writew(cpu_to_be16(eaddrs[0]), filter_addr); filter_addr += 8;
+ }
+ while (i++ < 16) {
+ writew(0xffff, filter_addr); filter_addr += 4;
+ writew(0xffff, filter_addr); filter_addr += 4;
+ writew(0xffff, filter_addr); filter_addr += 8;
+ }
+ rx_mode = AcceptBroadcast | AcceptMyPhys;
+ } else {
+ /* Must use a multicast hash table. */
+ long filter_addr;
+ u16 mc_filter[32] __attribute__ ((aligned(sizeof(long)))); /* Multicast hash filter */
+
+ memset(mc_filter, 0, sizeof(mc_filter));
+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
+ i++, mclist = mclist->next) {
+ set_bit(ether_crc_le(ETH_ALEN, mclist->dmi_addr) >> 23, mc_filter);
+ }
+ /* Clear the perfect filter list. */
+ filter_addr = ioaddr + 0x56000 + 1*16;
+ for (i = 1; i < 16; i++) {
+ writew(0xffff, filter_addr); filter_addr += 4;
+ writew(0xffff, filter_addr); filter_addr += 4;
+ writew(0xffff, filter_addr); filter_addr += 8;
+ }
+ for (filter_addr=ioaddr + 0x56100, i=0; i < 32; filter_addr+= 16, i++)
+ writew(mc_filter[i], filter_addr);
+ rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
+ }
+ writel(rx_mode, ioaddr + RxFilterMode);
+}
+
+static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ struct netdev_private *np = dev->priv;
+ u16 *data = (u16 *)&rq->ifr_data;
+
+ switch(cmd) {
+ case SIOCDEVPRIVATE: /* Get the address of the PHY in use. */
+ data[0] = np->phys[0] & 0x1f;
+ /* Fall Through */
+ case SIOCDEVPRIVATE+1: /* Read the specified MII register. */
+ data[3] = mdio_read(dev, data[0] & 0x1f, data[1] & 0x1f);
+ return 0;
+ case SIOCDEVPRIVATE+2: /* Write the specified MII register */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ if (data[0] == np->phys[0]) {
+ u16 value = data[2];
+ switch (data[1]) {
+ case 0:
+ if (value & 0x9000) /* Autonegotiation. */
+ np->medialock = 0;
+ else {
+ np->full_duplex = (value & 0x0100) ? 1 : 0;
+ np->medialock = 1;
+ }
+ break;
+ case 4: np->advertising = value; break;
+ }
+ check_duplex(dev, 0);
+ }
+ mdio_write(dev, data[0] & 0x1f, data[1] & 0x1f, data[2]);
+ return 0;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static int netdev_close(struct net_device *dev)
+{
+ long ioaddr = dev->base_addr;
+ struct netdev_private *np = dev->priv;
+ int i;
+
+ netif_device_detach(dev);
+
+ del_timer_sync(&np->timer);
+
+ if (debug > 1) {
+ printk(KERN_DEBUG "%s: Shutting down ethercard, Intr status %4.4x.\n",
+ dev->name, (int)readl(ioaddr + IntrStatus));
+ printk(KERN_DEBUG "%s: Queue pointers were Tx %d / %d, Rx %d / %d.\n",
+ dev->name, np->cur_tx, np->dirty_tx, np->cur_rx, np->dirty_rx);
+ }
+
+ /* Disable interrupts by clearing the interrupt mask. */
+ writel(0, ioaddr + IntrEnable);
+
+ /* Stop the chip's Tx and Rx processes. */
+
+#ifdef __i386__
+ if (debug > 2) {
+ printk("\n"KERN_DEBUG" Tx ring at %8.8x:\n",
+ np->tx_ring_dma);
+ for (i = 0; i < 8 /* TX_RING_SIZE is huge! */; i++)
+ printk(KERN_DEBUG " #%d desc. %8.8x %8.8x -> %8.8x.\n",
+ i, le32_to_cpu(np->tx_ring[i].status),
+ le32_to_cpu(np->tx_ring[i].first_addr),
+ le32_to_cpu(np->tx_done_q[i].status));
+ printk(KERN_DEBUG " Rx ring at %8.8x -> %p:\n",
+ np->rx_ring_dma, np->rx_done_q);
+ if (np->rx_done_q)
+ for (i = 0; i < 8 /* RX_RING_SIZE */; i++) {
+ printk(KERN_DEBUG " #%d desc. %8.8x -> %8.8x\n",
+ i, le32_to_cpu(np->rx_ring[i].rxaddr), le32_to_cpu(np->rx_done_q[i].status));
+ }
+ }
+#endif /* __i386__ debugging only */
+
+ free_irq(dev->irq, dev);
+
+ /* Free all the skbuffs in the Rx queue. */
+ for (i = 0; i < RX_RING_SIZE; i++) {
+ np->rx_ring[i].rxaddr = cpu_to_le32(0xBADF00D0); /* An invalid address. */
+ if (np->rx_info[i].skb != NULL) {
+ pci_unmap_single(np->pci_dev, np->rx_info[i].mapping, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(np->rx_info[i].skb);
+ }
+ np->rx_info[i].skb = NULL;
+ np->rx_info[i].mapping = 0;
+ }
+ for (i = 0; i < TX_RING_SIZE; i++) {
+ struct sk_buff *skb = np->tx_info[i].skb;
+ if (skb == NULL)
+ continue;
+ pci_unmap_single(np->pci_dev,
+ np->tx_info[i].first_mapping,
+ skb_first_frag_len(skb), PCI_DMA_TODEVICE);
+ np->tx_info[i].first_mapping = 0;
+ dev_kfree_skb(skb);
+ np->tx_info[i].skb = NULL;
+ }
+
+ COMPAT_MOD_DEC_USE_COUNT;
+
+ return 0;
+}
+
+
+static void __devexit starfire_remove_one (struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct netdev_private *np;
+
+ if (!dev)
+ BUG();
+
+ np = dev->priv;
+
+ unregister_netdev(dev);
+ iounmap((char *)dev->base_addr);
+
+ release_mem_region(pci_resource_start (pdev, 0),
+ pci_resource_len (pdev, 0));
+
+ if (np->tx_done_q)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->tx_done_q, np->tx_done_q_dma);
+ if (np->rx_done_q)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->rx_done_q, np->rx_done_q_dma);
+ if (np->tx_ring)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->tx_ring, np->tx_ring_dma);
+ if (np->rx_ring)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->rx_ring, np->rx_ring_dma);
+
+ kfree(dev);
+}
+
+
+static struct pci_driver starfire_driver = {
+ name: "starfire",
+ probe: starfire_init_one,
+ remove: starfire_remove_one,
+ id_table: starfire_pci_tbl,
+};
+
+
+static int __init starfire_init (void)
+{
+ return pci_module_init (&starfire_driver);
+}
+
+
+static void __exit starfire_cleanup (void)
+{
+ pci_unregister_driver (&starfire_driver);
+}
+
+
+module_init(starfire_init);
+module_exit(starfire_cleanup);
+
+
+/*
+ * Local variables:
+ * compile-command: "gcc -DMODULE -Wall -Wstrict-prototypes -O6 -c starfire.c"
+ * simple-compile-command: "gcc -DMODULE -O6 -c starfire.c"
+ * c-basic-offset: 8
+ * tab-width: 8
+ * End:
+ */
--- /usr/src/local/linux-2.2.19pre9-vanilla/drivers/net/starfire_firmware.pl Fri Feb 9 20:11:48 2001
+++ linux-2.2.18/drivers/net/starfire_firmware.pl Wed Feb 7 17:59:17 2001
@@ -0,0 +1,31 @@
+#!/usr/bin/perl
+
+# This script can be used to generate a new starfire_firmware.h
+# from GFP_RX.DAT and GFP_TX.DAT, files included with the DDK
+# and also with the Novell drivers.
+
+open FW, "GFP_RX.DAT" || die;
+open FWH, ">starfire_firmware.h" || die;
+
+printf(FWH "static u32 firmware_rx[] = {\n");
+$counter = 0;
+while ($foo = <FW>) {
+ chomp $foo;
+ printf(FWH " 0x%s, 0x0000%s,\n", substr($foo, 4, 8), substr($foo, 0, 4));
+ $counter++;
+}
+
+close FW;
+open FW, "GFP_TX.DAT" || die;
+
+printf(FWH "};\t/* %d Rx instructions */\n#define FIRMWARE_RX_SIZE %d\n\nstatic u32 firmware_tx[] = {\n", $counter, $counter);
+$counter = 0;
+while ($foo = <FW>) {
+ chomp $foo;
+ printf(FWH " 0x%s, 0x0000%s,\n", substr($foo, 4, 8), substr($foo, 0, 4));
+ $counter++;
+}
+
+close FW;
+printf(FWH "};\t/* %d Tx instructions */\n#define FIRMWARE_TX_SIZE %d\n", $counter, $counter);
+close(FWH);
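For reference, the starfire_firmware.h that this script generates has the
following overall shape (the hex words below are made-up placeholders, not
real GFP microcode):

	static u32 firmware_rx[] = {
	        0x12345678, 0x00009abc,
	        0x23456789, 0x0000def0,
	};	/* 2 Rx instructions */
	#define FIRMWARE_RX_SIZE 2

	static u32 firmware_tx[] = {
	        0x34567890, 0x00001234,
	        0x45678901, 0x00005678,
	};	/* 2 Tx instructions */
	#define FIRMWARE_TX_SIZE 2

Each instruction line in the .DAT files (12 hex digits) becomes a pair of
32-bit words, which is why netdev_open() writes FIRMWARE_RX_SIZE * 2 and
FIRMWARE_TX_SIZE * 2 words into the Rx/Tx frame processors.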
--- /usr/src/local/linux-2.2.19pre9-vanilla/MAINTAINERS Fri Feb 9 20:10:42 2001
+++ linux-2.2.18/MAINTAINERS Wed Feb 7 19:34:25 2001
@@ -937,6 +915,11 @@
M: [email protected]
W: http://www.stallion.com
S: Supported
+
+STARFIRE/DURALAN NETWORK DRIVER
+P: Ion Badulescu
+M: [email protected]
+S: Maintained

STARMODE RADIO IP (STRIP) PROTOCOL DRIVER
W: http://mosquitonet.Stanford.EDU/strip.html
--- /usr/src/local/linux-2.2.19pre9-vanilla/Documentation/Configure.help Fri Feb 9 20:10:42 2001
+++ linux-2.2.18/Documentation/Configure.help Wed Feb 7 19:40:42 2001
@@ -6314,6 +6238,18 @@

If you don't have this card, of course say N.

+Adaptec Starfire support (EXPERIMENTAL)
+CONFIG_ADAPTEC_STARFIRE
+ Say Y here if you have an Adaptec Starfire (or DuraLAN) PCI network
+ adapter. The DuraLAN chip is used on the 64 bit PCI boards from
+ Adaptec, e.g. the ANA-6922A. The older 32 bit boards use the tulip
+ driver.
+
+ If you want to compile this driver as a module ( = code which can be
+ inserted in and removed from the running kernel whenever you want),
+ say M here and read Documentation/modules.txt. This is recommended.
+ The module will be called starfire.o.
+
Alteon AceNIC/3Com 3C985/NetGear GA620 Gigabit support
CONFIG_ACENIC
Say Y here if you have an Alteon AceNIC or 3Com 3C985 PCI Gigabit


2001-02-12 09:22:58

by Ion Badulescu

[permalink] [raw]
Subject: [PATCH] new version of the starfire driver for 2.2.19pre

On Sat, 10 Feb 2001, Ion Badulescu wrote:

> Hi Alan,
>
> This is basically the same driver I sent to Jeff Garzik and you yesterday,
> for 2.4.1. Only one byte is different, in the version string. :-) The
> patch was generated against 2.2.18, it applies cleanly to 2.2.19pre9.

And here is a new version, which fixes the initialization for the
compiled-in case and also includes the Config.in and Makefile patches
(which I forgot to diff last time).

By the way, is there a particular reason why drivers/net doesn't allow the
2.4 method of initializing compiled-in drivers?

Thanks,
Ion

--
It is better to keep your mouth shut and be thought a fool,
than to open it and remove all doubt.
--------------------------------
--- /usr/src/local/linux-2.2.18-vanilla/MAINTAINERS Sun Feb 11 15:41:53 2001
+++ linux-2.2.18/MAINTAINERS Wed Feb 7 19:34:25 2001
@@ -916,6 +916,11 @@
W: http://www.stallion.com
S: Supported

+STARFIRE/DURALAN NETWORK DRIVER
+P: Ion Badulescu
+M: [email protected]
+S: Maintained
+
STARMODE RADIO IP (STRIP) PROTOCOL DRIVER
W: http://mosquitonet.Stanford.EDU/strip.html
S: Unsupported ?
--- /usr/src/local/linux-2.2.18-vanilla/Documentation/Configure.help Sun Feb 11 15:41:53 2001
+++ linux-2.2.18/Documentation/Configure.help Wed Feb 7 19:40:42 2001
@@ -6238,6 +6238,18 @@

If you don't have this card, of course say N.

+Adaptec Starfire support (EXPERIMENTAL)
+CONFIG_ADAPTEC_STARFIRE
+ Say Y here if you have an Adaptec Starfire (or DuraLAN) PCI network
+ adapter. The DuraLAN chip is used on the 64 bit PCI boards from
+ Adaptec, e.g. the ANA-6922A. The older 32 bit boards use the tulip
+ driver.
+
+ If you want to compile this driver as a module ( = code which can be
+ inserted in and removed from the running kernel whenever you want),
+ say M here and read Documentation/modules.txt. This is recommended.
+ The module will be called starfire.o.
+
Alteon AceNIC/3Com 3C985/NetGear GA620 Gigabit support
CONFIG_ACENIC
Say Y here if you have an Alteon AceNIC or 3Com 3C985 PCI Gigabit
--- /usr/src/local/linux-2.2.18-vanilla/drivers/net/Config.in Sun Feb 11 15:44:07 2001
+++ linux-2.2.18/drivers/net/Config.in Wed Feb 7 17:56:02 2001
@@ -132,6 +132,7 @@
if [ "$CONFIG_NET_EISA" = "y" ]; then
tristate 'AMD PCnet32 (VLB and PCI) support' CONFIG_PCNET32
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ tristate 'Adaptec Starfire support (EXPERIMENTAL)' CONFIG_ADAPTEC_STARFIRE
tristate 'Ansel Communications EISA 3200 support (EXPERIMENTAL)' CONFIG_AC3200
fi
tristate 'Apricot Xen-II on board Ethernet' CONFIG_APRICOT
--- /usr/src/local/linux-2.2.18-vanilla/drivers/net/Makefile Sun Feb 11 15:44:07 2001
+++ linux-2.2.18/drivers/net/Makefile Sun Feb 11 14:51:10 2001
@@ -742,6 +742,14 @@
endif
endif

+ifeq ($(CONFIG_ADAPTEC_STARFIRE),y)
+L_OBJS += starfire.o
+else
+ ifeq ($(CONFIG_ADAPTEC_STARFIRE),m)
+ M_OBJS += starfire.o
+ endif
+endif
+
ifeq ($(CONFIG_AC3200),y)
L_OBJS += ac3200.o
CONFIG_8390_BUILTIN = y
--- /usr/src/local/linux-2.2.18-vanilla/drivers/net/starfire.c Sun Feb 11 15:43:10 2001
+++ linux-2.2.18/drivers/net/starfire.c Sun Feb 11 16:52:50 2001
@@ -0,0 +1,1841 @@
+/* starfire.c: Linux device driver for the Adaptec Starfire network adapter. */
+/*
+ Written 1998-2000 by Donald Becker.
+
+ This software may be used and distributed according to the terms of
+ the GNU General Public License (GPL), incorporated herein by reference.
+ Drivers based on or derived from this code fall under the GPL and must
+ retain the authorship, copyright and license notice. This file is not
+ a complete program and may only be used when the entire operating
+ system is licensed under the GPL.
+
+ The author may be reached as [email protected], or C/O
+ Scyld Computing Corporation
+ 410 Severn Ave., Suite 210
+ Annapolis MD 21403
+
+ Support and updates available at
+ http://www.scyld.com/network/starfire.html
+
+ -----------------------------------------------------------
+
+ Linux kernel-specific changes:
+
+ LK1.1.1 (jgarzik):
+ - Use PCI driver interface
+ - Fix MOD_xxx races
+ - softnet fixups
+
+ LK1.1.2 (jgarzik):
+ - Merge Becker version 0.15
+
+ LK1.1.3 (Andrew Morton)
+ - Timer cleanups
+
+ LK1.1.4 (jgarzik):
+ - Merge Becker version 1.03
+
+ LK1.2.1 (Ion Badulescu <[email protected]>)
+ - Support hardware Rx/Tx checksumming
+ - Use the GFP firmware taken from Adaptec's Netware driver
+
+ LK1.2.2 (Ion Badulescu)
+ - Backported to 2.2.x
+
+ LK1.2.3 (Ion Badulescu)
+ - Fix the flaky mdio interface
+ - More compat clean-ups
+
+ LK1.2.4 (Ion Badulescu)
+ - More 2.2.x initialization fixes
+
+TODO:
+ - implement tx_timeout() properly
+ - support ethtool
+*/
+
+/* These identify the driver base version and may not be removed. */
+static const char version1[] =
+"starfire.c:v1.03 7/26/2000 Written by Donald Becker <[email protected]>\n";
+static const char version2[] =
+" Updates and info at http://www.scyld.com/network/starfire.html\n";
+
+static const char version3[] =
+" (unofficial 2.2.x kernel port, version 1.2.4, February 11, 2001)\n";
+
+/* The user-configurable values.
+ These may be modified when a driver module is loaded.*/
+
+/*
+ * Adaptec's license for their Novell drivers (which is where I got the
+ * firmware files) does not allow us to redistribute them. Thus, we can't
+ * include them with this driver.
+ *
+ * However, an end-user is allowed to download and use them, after
+ * converting them to C header files using starfire_firmware.pl.
+ * Once that's done, the #undef must be changed into a #define
+ * for this driver to really use the firmware. Note that Rx/Tx
+ * hardware TCP checksumming is not possible without the firmware.
+ *
+ * I'm currently [Feb 2001] talking to Adaptec about this redistribution
+ * issue. Stay tuned...
+ */
+#undef HAS_FIRMWARE
+/*
+ * The current frame processor firmware fails to checksum a fragment
+ * of length 1. If and when this is fixed, the #define below can be removed.
+ */
+#define HAS_BROKEN_FIRMWARE
+
+/* Used for tuning interrupt latency vs. overhead. */
+static int interrupt_mitigation = 0x0;
+
+static int debug = 1; /* 1 normal messages, 0 quiet .. 7 verbose. */
+static int max_interrupt_work = 20;
+static int mtu = 0;
+/* Maximum number of multicast addresses to filter (vs. rx-all-multicast).
+ The Starfire has a 512 element hash table based on the Ethernet CRC. */
+static int multicast_filter_limit = 32;
+
+#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
+/*
+ * Set the copy breakpoint for the copy-only-tiny-frames scheme.
+ * Setting to > 1518 effectively disables this feature.
+ *
+ * NOTE:
+ * The ia64 doesn't allow unaligned loads, even of integers misaligned
+ * on a 2 byte boundary. Thus we always force copying of packets, as the
+ * starfire doesn't allow for misaligned DMAs ;-(
+ * 23/10/2000 - Jes
+ *
+ * Neither does the Alpha. -Ion
+ */
+#if defined(__ia64__) || defined(__alpha__)
+static int rx_copybreak = PKT_BUF_SZ;
+#else
+static int rx_copybreak = 0;
+#endif
+
+/* Used to pass the media type, etc.
+ Both 'options[]' and 'full_duplex[]' exist for driver interoperability.
+ The media type is usually passed in 'options[]'.
+*/
+#define MAX_UNITS 8 /* More are supported, limit only on options */
+static int options[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
+static int full_duplex[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
+
+/* Operational parameters that are set at compile time. */
+
+/* The "native" ring sizes are either 256 or 2048.
+ However in some modes a descriptor may be marked to wrap the ring earlier.
+ The driver allocates a single page for each descriptor ring, constraining
+ the maximum size in an architecture-dependent way.
+*/
+#define RX_RING_SIZE 256
+#define TX_RING_SIZE 32
+/* The completion queues are fixed at 1024 entries, i.e. 4KB or 8KB. */
+#define DONE_Q_SIZE 1024
+
+/* Operational parameters that usually are not changed. */
+/* Time in jiffies before concluding the transmitter is hung. */
+#define TX_TIMEOUT (2*HZ)
+
+#define skb_first_frag_len(skb) ((skb)->len)
+
+#if !defined(__OPTIMIZE__)
+#warning You must compile this file with the correct options!
+#warning See the last lines of the source file.
+#error You must compile this driver with "-O".
+#endif
+
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <asm/processor.h> /* Processor type for cache alignment. */
+#include <asm/bitops.h>
+#include <asm/io.h>
+
+#ifdef HAS_FIRMWARE
+#include "starfire_firmware.h"
+#endif /* HAS_FIRMWARE */
+
+MODULE_AUTHOR("Donald Becker <[email protected]>");
+MODULE_DESCRIPTION("Adaptec Starfire Ethernet driver");
+MODULE_PARM(max_interrupt_work, "i");
+MODULE_PARM(mtu, "i");
+MODULE_PARM(debug, "i");
+MODULE_PARM(rx_copybreak, "i");
+MODULE_PARM(interrupt_mitigation, "i");
+MODULE_PARM(options, "1-" __MODULE_STRING(MAX_UNITS) "i");
+MODULE_PARM(full_duplex, "1-" __MODULE_STRING(MAX_UNITS) "i");
+
+/*
+ Theory of Operation
+
+I. Board Compatibility
+
+This driver is for the Adaptec 6915 "Starfire" 64 bit PCI Ethernet adapter.
+
+II. Board-specific settings
+
+III. Driver operation
+
+IIIa. Ring buffers
+
+The Starfire hardware uses multiple fixed-size descriptor queues/rings. The
+ring sizes are fixed by the hardware, but the ring may optionally be wrapped
+earlier by the END bit in a descriptor.
+This driver uses that hardware queue size for the Rx ring, where a large
+number of entries has no ill effect beyond increasing the potential backlog.
+The Tx ring is wrapped with the END bit, since a large hardware Tx queue
+disables the queue layer priority ordering and we have no mechanism to
+utilize the hardware two-level priority queue. When modifying the
+RX/TX_RING_SIZE, pay close attention to page sizes and the ring-empty warning
+levels.
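+
+As a worked example with the sizes used here (assuming the default 32 bit
+rxaddr, i.e. ADDR_64BITS not defined):
+
+	RX_RING_SIZE * sizeof(struct starfire_rx_desc) = 256 * 4 = 1024 bytes
+	TX_RING_SIZE * sizeof(struct starfire_tx_desc) =  32 * 8 =  256 bytes
+
+so each ring fits comfortably in the single PAGE_SIZE allocation that
+netdev_open() makes for it (4096 bytes on i386).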
+
+IIIb/c. Transmit/Receive Structure
+
+See the Adaptec manual for the many possible structures, and options for
+each structure. There are far too many to document here.
+
+For transmit this driver uses type 0/1 transmit descriptors (depending
+on the presence of the zerocopy patches), and relies on automatic
+minimum-length padding. It does not use the completion queue
+consumer index, but instead checks for non-zero status entries.
+
+For receive this driver uses type 0 receive descriptors. The driver
+allocates full frame size skbuffs for the Rx ring buffers, so all frames
+should fit in a single descriptor. The driver does not use the completion
+queue consumer index, but instead checks for non-zero status entries.
+
+When an incoming frame is less than RX_COPYBREAK bytes long, a fresh skbuff
+is allocated and the frame is copied to the new skbuff. When the incoming
+frame is larger, the skbuff is passed directly up the protocol stack.
+Buffers consumed this way are replaced by newly allocated skbuffs in a later
+phase of receive.
+
+A notable aspect of operation is that unaligned buffers are not permitted by
+the Starfire hardware. The IP header at offset 14 in an ethernet frame thus
+isn't longword aligned, which may cause problems on some machines,
+e.g. Alphas and IA64. For these architectures, the driver is forced to copy
+the frame into a new skbuff unconditionally. Copied frames are put into the
+skbuff at an offset of "+2", thus 16-byte aligning the IP header.
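+
+In code terms, the copy-break path in netdev_rx() below boils down to the
+following sketch (DMA syncing, unmapping and error handling omitted); the
+skb_reserve(skb, 2) is what produces the 16-byte aligned IP header:
+
+	if (pkt_len < rx_copybreak
+	    && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
+		skb_reserve(skb, 2);
+		eth_copy_and_sum(skb, np->rx_info[entry].skb->tail, pkt_len, 0);
+		skb_put(skb, pkt_len);
+	} else {
+		skb = np->rx_info[entry].skb;
+		skb_put(skb, pkt_len);
+	}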
+
+IIId. Synchronization
+
+The driver runs as two independent, single-threaded flows of control. One
+is the send-packet routine, which enforces single-threaded use by the
+dev->tbusy flag. The other thread is the interrupt handler, which is single
+threaded by the hardware and interrupt handling software.
+
+The send packet thread has partial control over the Tx ring and 'dev->tbusy'
+flag. It sets the tbusy flag whenever it's queuing a Tx packet. If the next
+queue slot is empty, it clears the tbusy flag when finished; otherwise it sets
+the 'lp->tx_full' flag.
+
+The interrupt handler has exclusive control over the Rx ring and records stats
+from the Tx ring. After reaping the stats, it marks the Tx queue entry as
+empty by incrementing the dirty_tx mark. Iff the 'lp->tx_full' flag is set, it
+clears both the tx_full and tbusy flags.
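+
+Condensed from start_tx() and intr_handler() below, that handshake is:
+
+	start_tx():	if (np->cur_tx - np->dirty_tx >= TX_RING_SIZE - 1) {
+				np->tx_full = 1;
+				netif_stop_queue(dev);
+			}
+	intr_handler():	if (np->tx_full
+			    && np->cur_tx - np->dirty_tx < TX_RING_SIZE - 4) {
+				np->tx_full = 0;
+				netif_wake_queue(dev);
+			}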
+
+IV. Notes
+
+IVb. References
+
+The Adaptec Starfire manuals, available only from Adaptec.
+http://www.scyld.com/expert/100mbps.html
+http://www.scyld.com/expert/NWay.html
+
+IVc. Errata
+
+*/
+
+
+
+/* 2.2.x compatibility code */
+#if LINUX_VERSION_CODE < 0x20300
+#include <linux/kcomp.h>
+
+static LIST_HEAD(pci_drivers);
+
+struct pci_driver_mapping {
+ struct pci_dev *dev;
+ struct pci_driver *drv;
+ void *driver_data;
+};
+
+struct pci_device_id {
+ unsigned int vendor, device;
+ unsigned int subvendor, subdevice;
+ unsigned int class, class_mask;
+ unsigned long driver_data;
+};
+
+struct pci_driver {
+ struct list_head node;
+ struct pci_dev *dev;
+ char *name;
+ const struct pci_device_id *id_table; /* NULL if wants all devices */
+ int (*probe)(struct pci_dev *dev, const struct pci_device_id *id); /* New device inserted */
+ void (*remove)(struct pci_dev *dev); /* Device removed (NULL if not a hot-plug capable driver) */
+ void (*suspend)(struct pci_dev *dev); /* Device suspended */
+ void (*resume)(struct pci_dev *dev); /* Device woken up */
+};
+
+#define PCI_MAX_MAPPINGS 16
+static struct pci_driver_mapping drvmap [PCI_MAX_MAPPINGS] = { { NULL, } , };
+
+#define __devinit __init
+#define __devinitdata __initdata
+#define __devexit
+#define MODULE_DEVICE_TABLE(foo,bar)
+#define SET_MODULE_OWNER(dev)
+#define COMPAT_MOD_INC_USE_COUNT MOD_INC_USE_COUNT
+#define COMPAT_MOD_DEC_USE_COUNT MOD_DEC_USE_COUNT
+#define PCI_ANY_ID (~0)
+#define IORESOURCE_MEM 2
+#define PCI_DMA_FROMDEVICE 0
+#define PCI_DMA_TODEVICE 0
+
+#define request_mem_region(addr, size, name) ((void *)1)
+#define release_mem_region(addr, size)
+#define del_timer_sync(timer) del_timer(timer)
+
+static inline void *pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
+ dma_addr_t *dma_handle)
+{
+ void *virt_ptr;
+
+ virt_ptr = kmalloc(size, GFP_KERNEL);
+ *dma_handle = virt_to_bus(virt_ptr);
+ return virt_ptr;
+}
+#define pci_free_consistent(cookie, size, ptr, dma_ptr) kfree(ptr)
+#define pci_map_single(cookie, address, size, dir) virt_to_bus(address)
+#define pci_unmap_single(cookie, address, size, dir)
+#define pci_dma_sync_single(cookie, address, size, dir)
+#undef pci_resource_flags
+#define pci_resource_flags(dev, i) \
+ ((dev->base_address[i] & IORESOURCE_IO) ? IORESOURCE_IO : IORESOURCE_MEM)
+
+void * pci_get_drvdata (struct pci_dev *dev)
+{
+ int i;
+
+ for (i = 0; i < PCI_MAX_MAPPINGS; i++)
+ if (drvmap[i].dev == dev)
+ return drvmap[i].driver_data;
+
+ return NULL;
+}
+
+void pci_set_drvdata (struct pci_dev *dev, void *driver_data)
+{
+ int i;
+
+ for (i = 0; i < PCI_MAX_MAPPINGS; i++)
+ if (drvmap[i].dev == dev) {
+ drvmap[i].driver_data = driver_data;
+ return;
+ }
+}
+
+const struct pci_device_id *
+pci_compat_match_device(const struct pci_device_id *ids, struct pci_dev *dev)
+{
+ u16 subsystem_vendor, subsystem_device;
+
+ pci_read_config_word(dev, PCI_SUBSYSTEM_VENDOR_ID, &subsystem_vendor);
+ pci_read_config_word(dev, PCI_SUBSYSTEM_ID, &subsystem_device);
+
+ while (ids->vendor || ids->subvendor || ids->class_mask) {
+ if ((ids->vendor == PCI_ANY_ID || ids->vendor == dev->vendor) &&
+ (ids->device == PCI_ANY_ID || ids->device == dev->device) &&
+ (ids->subvendor == PCI_ANY_ID || ids->subvendor == subsystem_vendor) &&
+ (ids->subdevice == PCI_ANY_ID || ids->subdevice == subsystem_device) &&
+ !((ids->class ^ dev->class) & ids->class_mask))
+ return ids;
+ ids++;
+ }
+ return NULL;
+}
+
+static int
+pci_announce_device(struct pci_driver *drv, struct pci_dev *dev)
+{
+ const struct pci_device_id *id;
+ int found, i;
+
+ if (drv->id_table) {
+ id = pci_compat_match_device(drv->id_table, dev);
+ if (!id)
+ return 0;
+ } else
+ id = NULL;
+
+ found = 0;
+ for (i = 0; i < PCI_MAX_MAPPINGS; i++)
+ if (!drvmap[i].dev) {
+ drvmap[i].dev = dev;
+ drvmap[i].drv = drv;
+ found = 1;
+ break;
+ }
+
+ if (!found)
+ return 0;
+
+ if (drv->probe(dev, id) >= 0)
+ return 1;
+
+ /* clean up */
+ drvmap[i].dev = NULL;
+ return 0;
+}
+
+int
+pci_register_driver(struct pci_driver *drv)
+{
+ struct pci_dev *dev;
+ int count = 0, found, i;
+#ifdef CONFIG_PCI
+ list_add_tail(&drv->node, &pci_drivers);
+ for (dev = pci_devices; dev; dev = dev->next) {
+ found = 0;
+ for (i = 0; i < PCI_MAX_MAPPINGS && !found; i++)
+ if (drvmap[i].dev == dev)
+ found = 1;
+ if (!found)
+ count += pci_announce_device(drv, dev);
+ }
+#endif
+ return count;
+}
+
+void
+pci_unregister_driver(struct pci_driver *drv)
+{
+ struct pci_dev *dev;
+ int i, found;
+#ifdef CONFIG_PCI
+ list_del(&drv->node);
+ for (dev = pci_devices; dev; dev = dev->next) {
+ found = 0;
+ for (i = 0; i < PCI_MAX_MAPPINGS; i++)
+ if (drvmap[i].dev == dev) {
+ found = 1;
+ break;
+ }
+ if (found) {
+ if (drv->remove)
+ drv->remove(dev);
+ drvmap[i].dev = NULL;
+ }
+ }
+#endif
+}
+
+void *compat_request_region (unsigned long start, unsigned long n, const char *name)
+{
+ if (check_region (start, n) != 0)
+ return NULL;
+ request_region (start, n, name);
+ return (void *) 1;
+}
+
+static inline int pci_module_init(struct pci_driver *drv)
+{
+ if (pci_register_driver(drv))
+ return 0;
+ return -ENODEV;
+}
+
+static struct pci_driver starfire_driver;
+
+int __init starfire_probe(struct net_device *dev)
+{
+ static int __initdata probed = 0;
+
+ if (probed)
+ return -ENODEV;
+ probed++;
+
+ return pci_module_init(&starfire_driver);
+}
+
+#define init_tx_timer(dev, func, timeout)
+#define kick_tx_timer(dev, func, timeout) \
+ if (netif_queue_stopped(dev)) { \
+ /* If this happens network layer tells us we're broken. */ \
+ if (jiffies - dev->trans_start > timeout) \
+ func(dev); \
+ }
+
+#else /* LINUX_VERSION_CODE > 0x20300 */
+
+#define COMPAT_MOD_INC_USE_COUNT
+#define COMPAT_MOD_DEC_USE_COUNT
+
+#define init_tx_timer(dev, func, timeout) \
+ dev->tx_timeout = func; \
+ dev->watchdog_timeo = timeout;
+#define kick_tx_timer(dev, func, timeout)
+
+
+#endif /* LINUX_VERSION_CODE > 0x20300 */
+/* end of compatibility code */
+
+
+enum chip_capability_flags {CanHaveMII=1, };
+#define PCI_IOTYPE (PCI_USES_MASTER | PCI_USES_MEM | PCI_ADDR0)
+#define MEM_ADDR_SZ 0x80000 /* And maps in 0.5MB(!). */
+
+#if 0
+#define ADDR_64BITS 1 /* This chip uses 64 bit addresses. */
+#endif
+
+#define HAS_IP_COPYSUM 1
+
+enum chipset {
+ CH_6915 = 0,
+};
+
+static struct pci_device_id starfire_pci_tbl[] __devinitdata = {
+ { 0x9004, 0x6915, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_6915 },
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, starfire_pci_tbl);
+
+/* A chip capabilities table, matching the CH_xxx entries in xxx_pci_tbl[] above. */
+static struct chip_info {
+ const char *name;
+ int io_size;
+ int drv_flags;
+} netdrv_tbl[] __devinitdata = {
+ { "Adaptec Starfire 6915", MEM_ADDR_SZ, CanHaveMII },
+};
+
+
+/* Offsets to the device registers.
+ Unlike software-only systems, device drivers interact with complex hardware.
+ It's not useful to define symbolic names for every register bit in the
+ device. The name can only partially document the semantics and make
+ the driver longer and more difficult to read.
+ In general, only the important configuration values or bits changed
+ multiple times should be defined symbolically.
+*/
+enum register_offsets {
+ PCIDeviceConfig=0x50040, GenCtrl=0x50070, IntrTimerCtrl=0x50074,
+ IntrClear=0x50080, IntrStatus=0x50084, IntrEnable=0x50088,
+ MIICtrl=0x52000, StationAddr=0x50120, EEPROMCtrl=0x51000,
+ TxDescCtrl=0x50090,
+ TxRingPtr=0x50098, HiPriTxRingPtr=0x50094, /* Low and High priority. */
+ TxRingHiAddr=0x5009C, /* 64 bit address extension. */
+ TxProducerIdx=0x500A0, TxConsumerIdx=0x500A4,
+ TxThreshold=0x500B0,
+ CompletionHiAddr=0x500B4, TxCompletionAddr=0x500B8,
+ RxCompletionAddr=0x500BC, RxCompletionQ2Addr=0x500C0,
+ CompletionQConsumerIdx=0x500C4, RxDMACtrl=0x500D0,
+ RxDescQCtrl=0x500D4, RxDescQHiAddr=0x500DC, RxDescQAddr=0x500E0,
+ RxDescQIdx=0x500E8, RxDMAStatus=0x500F0, RxFilterMode=0x500F4,
+ TxMode=0x55000, TxGfpMem=0x58000, RxGfpMem=0x5a000,
+};
+
+/* Bits in the interrupt status/mask registers. */
+enum intr_status_bits {
+ IntrLinkChange=0xf0000000, IntrStatsMax=0x08000000,
+ IntrAbnormalSummary=0x02000000, IntrGeneralTimer=0x01000000,
+ IntrSoftware=0x800000, IntrRxComplQ1Low=0x400000,
+ IntrTxComplQLow=0x200000, IntrPCI=0x100000,
+ IntrDMAErr=0x080000, IntrTxDataLow=0x040000,
+ IntrRxComplQ2Low=0x020000, IntrRxDescQ1Low=0x010000,
+ IntrNormalSummary=0x8000, IntrTxDone=0x4000,
+ IntrTxDMADone=0x2000, IntrTxEmpty=0x1000,
+ IntrEarlyRxQ2=0x0800, IntrEarlyRxQ1=0x0400,
+ IntrRxQ2Done=0x0200, IntrRxQ1Done=0x0100,
+ IntrRxGFPDead=0x80, IntrRxDescQ2Low=0x40,
+ IntrNoTxCsum=0x20, IntrTxBadID=0x10,
+ IntrHiPriTxBadID=0x08, IntrRxGfp=0x04,
+ IntrTxGfp=0x02, IntrPCIPad=0x01,
+ /* not quite bits */
+ IntrRxDone=IntrRxQ2Done | IntrRxQ1Done,
+ IntrRxEmpty=IntrRxDescQ1Low | IntrRxDescQ2Low,
+};
+
+/* Bits in the RxFilterMode register. */
+enum rx_mode_bits {
+ AcceptBroadcast=0x04, AcceptAllMulticast=0x02, AcceptAll=0x01,
+ AcceptMulticast=0x10, AcceptMyPhys=0xE040,
+};
+
+/* Bits in the TxDescCtrl register. */
+enum tx_ctrl_bits {
+ TxDescSpaceUnlim=0x00, TxDescSpace32=0x10, TxDescSpace64=0x20,
+ TxDescSpace128=0x30, TxDescSpace256=0x40,
+ TxDescType0=0x00, TxDescType1=0x01, TxDescType2=0x02,
+ TxDescType3=0x03, TxDescType4=0x04,
+ TxNoDMACompletion=0x08, TxDescQ64bit=0x80,
+ TxHiPriFIFOThreshShift=24, TxPadLenShift=16,
+ TxDMABurstSizeShift=8,
+};
+
+/* Bits in the RxDescQCtrl register. */
+enum rx_ctrl_bits {
+ RxBufferLenShift=16, RxMinDescrThreshShift=0,
+ RxPrefetchMode=0x8000, Rx2048QEntries=0x4000,
+ RxVariableQ=0x2000, RxDesc64bit=0x1000,
+ RxDescQAddr64bit=0x0100,
+ RxDescSpace4=0x000, RxDescSpace8=0x100,
+ RxDescSpace16=0x200, RxDescSpace32=0x300,
+ RxDescSpace64=0x400, RxDescSpace128=0x500,
+ RxConsumerWrEn=0x80,
+};
+
+/* Bits in the RxCompletionAddr register */
+enum rx_compl_bits {
+ RxComplQAddr64bit=0x80, TxComplProducerWrEn=0x40,
+ RxComplType0=0x00, RxComplType1=0x10,
+ RxComplType2=0x20, RxComplType3=0x30,
+ RxComplThreshShift=0,
+};
+
+/* The Rx and Tx buffer descriptors. */
+struct starfire_rx_desc {
+ u32 rxaddr; /* Optionally 64 bits. */
+};
+enum rx_desc_bits {
+ RxDescValid=1, RxDescEndRing=2,
+};
+
+/* Completion queue entry.
+ You must update the page allocation, init_ring and the shift count in rx()
+ if using a larger format. */
+#ifdef HAS_FIRMWARE
+#define csum_rx_status
+#endif /* HAS_FIRMWARE */
+struct rx_done_desc {
+ u32 status; /* Low 16 bits is length. */
+#ifdef csum_rx_status
+ u32 status2; /* Low 16 bits is csum */
+#endif /* csum_rx_status */
+#ifdef full_rx_status
+ u32 status2;
+ u16 vlanid;
+ u16 csum; /* partial checksum */
+ u32 timestamp;
+#endif /* full_rx_status */
+};
+enum rx_done_bits {
+ RxOK=0x20000000, RxFIFOErr=0x10000000, RxBufQ2=0x08000000,
+};
+
+/* Type 1 Tx descriptor. */
+struct starfire_tx_desc {
+ u32 status; /* Upper bits are status, lower 16 length. */
+ u32 first_addr;
+};
+enum tx_desc_bits {
+ TxDescID=0xB0000000,
+ TxCRCEn=0x01000000, TxDescIntr=0x08000000,
+ TxRingWrap=0x04000000, TxCalTCP=0x02000000,
+};
+struct tx_done_report {
+ u32 status; /* timestamp, index. */
+#if 0
+ u32 intrstatus; /* interrupt status */
+#endif
+};
+
+#define PRIV_ALIGN 15 /* Required alignment mask */
+struct rx_ring_info {
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+};
+struct tx_ring_info {
+ struct sk_buff *skb;
+ dma_addr_t first_mapping;
+};
+
+struct netdev_private {
+ /* Descriptor rings first for alignment. */
+ struct starfire_rx_desc *rx_ring;
+ struct starfire_tx_desc *tx_ring;
+ dma_addr_t rx_ring_dma;
+ dma_addr_t tx_ring_dma;
+ /* The addresses of rx/tx-in-place skbuffs. */
+ struct rx_ring_info rx_info[RX_RING_SIZE];
+ struct tx_ring_info tx_info[TX_RING_SIZE];
+ /* Pointers to completion queues (full pages). I should cache line pad..*/
+ u8 pad0[100];
+ struct rx_done_desc *rx_done_q;
+ dma_addr_t rx_done_q_dma;
+ unsigned int rx_done;
+ struct tx_done_report *tx_done_q;
+ unsigned int tx_done;
+ dma_addr_t tx_done_q_dma;
+ struct net_device_stats stats;
+ struct timer_list timer; /* Media monitoring timer. */
+ struct pci_dev *pci_dev;
+ /* Frequently used values: keep some adjacent for cache effect. */
+ unsigned int cur_rx, dirty_rx; /* Producer/consumer ring indices */
+ unsigned int cur_tx, dirty_tx;
+ unsigned int rx_buf_sz; /* Based on MTU+slack. */
+ unsigned int tx_full:1; /* The Tx queue is full. */
+ /* These values keep track of the transceiver/media in use. */
+ unsigned int full_duplex:1, /* Full-duplex operation requested. */
+ medialock:1, /* Xcvr set to fixed speed/duplex. */
+ rx_flowctrl:1,
+ tx_flowctrl:1; /* Use 802.3x flow control. */
+ unsigned int default_port:4; /* Last dev->if_port value. */
+ u32 tx_mode;
+ u8 tx_threshold;
+ /* MII transceiver section. */
+ int mii_cnt; /* MII device addresses. */
+ u16 advertising; /* NWay media advertisement */
+ unsigned char phys[2]; /* MII device addresses. */
+};
+
+static int mdio_read(struct net_device *dev, int phy_id, int location);
+static void mdio_write(struct net_device *dev, int phy_id, int location, int value);
+static int netdev_open(struct net_device *dev);
+static void check_duplex(struct net_device *dev, int startup);
+static void netdev_timer(unsigned long data);
+static void tx_timeout(struct net_device *dev);
+static void init_ring(struct net_device *dev);
+static int start_tx(struct sk_buff *skb, struct net_device *dev);
+static void intr_handler(int irq, void *dev_instance, struct pt_regs *regs);
+static void netdev_error(struct net_device *dev, int intr_status);
+static int netdev_rx(struct net_device *dev);
+static void set_rx_mode(struct net_device *dev);
+static struct net_device_stats *get_stats(struct net_device *dev);
+static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
+static int netdev_close(struct net_device *dev);
+
+
+
+static int __devinit starfire_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct netdev_private *np;
+ int i, irq, option, chip_idx = ent->driver_data;
+ struct net_device *dev;
+ static int card_idx = -1;
+ static int printed_version = 0;
+ long ioaddr;
+ int drv_flags, io_size;
+ int boguscnt;
+
+ card_idx++;
+ option = card_idx < MAX_UNITS ? options[card_idx] : 0;
+
+ if (!printed_version++)
+ printk(KERN_INFO "%s" KERN_INFO "%s" KERN_INFO "%s",
+ version1, version2, version3);
+
+ if (pci_enable_device (pdev))
+ return -EIO;
+
+ ioaddr = pci_resource_start (pdev, 0);
+ io_size = pci_resource_len (pdev, 0);
+ if (!ioaddr || ((pci_resource_flags (pdev, 0) & IORESOURCE_MEM) == 0)) {
+ printk (KERN_ERR "starfire %d: no PCI MEM resources, aborting\n", card_idx);
+ return -ENODEV;
+ }
+
+ dev = init_etherdev(NULL, sizeof(*np));
+ if (!dev) {
+ printk (KERN_ERR "starfire %d: cannot alloc etherdev, aborting\n", card_idx);
+ return -ENOMEM;
+ }
+ SET_MODULE_OWNER(dev);
+
+ irq = pdev->irq;
+
+ if (request_mem_region (ioaddr, io_size, dev->name) == NULL) {
+ printk (KERN_ERR "starfire %d: resource 0x%x @ 0x%lx busy, aborting\n",
+ card_idx, io_size, ioaddr);
+ goto err_out_free_netdev;
+ }
+
+ ioaddr = (long) ioremap (ioaddr, io_size);
+ if (!ioaddr) {
+ printk (KERN_ERR "starfire %d: cannot remap 0x%x @ 0x%lx, aborting\n",
+ card_idx, io_size, ioaddr);
+ goto err_out_free_res;
+ }
+
+ pci_set_master (pdev);
+
+ printk(KERN_INFO "%s: %s at 0x%lx, ",
+ dev->name, netdrv_tbl[chip_idx].name, ioaddr);
+
+ /* Serial EEPROM reads are hidden by the hardware. */
+ for (i = 0; i < 6; i++)
+ dev->dev_addr[i] = readb(ioaddr + EEPROMCtrl + 20-i);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);
+
+#if ! defined(final_version) /* Dump the EEPROM contents during development. */
+ if (debug > 4)
+ for (i = 0; i < 0x20; i++)
+ printk("%2.2x%s",
+ (unsigned int)readb(ioaddr + EEPROMCtrl + i),
+ i % 16 != 15 ? " " : "\n");
+#endif
+
+ /* Issue soft reset */
+ writel(0x8000, ioaddr + TxMode);
+ udelay(1000);
+ writel(0, ioaddr + TxMode);
+
+ /* Reset the chip to erase previous misconfiguration. */
+ writel(1, ioaddr + PCIDeviceConfig);
+ boguscnt = 1000;
+ while (--boguscnt > 0) {
+ udelay(10);
+ if ((readl(ioaddr + PCIDeviceConfig) & 1) == 0)
+ break;
+ }
+ if (boguscnt == 0)
+ printk("%s: chipset reset never completed!\n", dev->name);
+ /* wait a little longer */
+ udelay(1000);
+
+ dev->base_addr = ioaddr;
+ dev->irq = irq;
+
+ np = dev->priv;
+ pci_set_drvdata(pdev, dev);
+
+ np->pci_dev = pdev;
+ drv_flags = netdrv_tbl[chip_idx].drv_flags;
+
+ if (dev->mem_start)
+ option = dev->mem_start;
+
+ /* The lower four bits are the media type. */
+ if (option > 0) {
+ if (option & 0x200)
+ np->full_duplex = 1;
+ np->default_port = option & 15;
+ if (np->default_port)
+ np->medialock = 1;
+ }
+ if (card_idx < MAX_UNITS && full_duplex[card_idx] > 0)
+ np->full_duplex = 1;
+
+ if (np->full_duplex)
+ np->medialock = 1;
+
+ /* The chip-specific entries in the device structure. */
+ dev->open = &netdev_open;
+ dev->hard_start_xmit = &start_tx;
+ init_tx_timer(dev, tx_timeout, TX_TIMEOUT);
+ dev->stop = &netdev_close;
+ dev->get_stats = &get_stats;
+ dev->set_multicast_list = &set_rx_mode;
+ dev->do_ioctl = &mii_ioctl;
+
+ if (mtu)
+ dev->mtu = mtu;
+
+ if (drv_flags & CanHaveMII) {
+ int phy, phy_idx = 0;
+ int mii_status;
+ for (phy = 0; phy < 32 && phy_idx < 4; phy++) {
+ mdio_write(dev, phy, 0, 0x8000);
+ udelay(500);
+ boguscnt = 1000;
+ while (--boguscnt > 0)
+ if ((mdio_read(dev, phy, 0) & 0x8000) == 0)
+ break;
+ if (boguscnt == 0) {
+ printk("%s: PHY reset never completed!\n", dev->name);
+ continue;
+ }
+ mii_status = mdio_read(dev, phy, 1);
+ if (mii_status != 0x0000) {
+ np->phys[phy_idx++] = phy;
+ np->advertising = mdio_read(dev, phy, 4);
+ printk(KERN_INFO "%s: MII PHY found at address %d, status "
+ "0x%4.4x advertising %4.4x.\n",
+ dev->name, phy, mii_status, np->advertising);
+ /* there can be only one PHY on-board */
+ break;
+ }
+ }
+ np->mii_cnt = phy_idx;
+ }
+
+ return 0;
+
+err_out_free_res:
+ release_mem_region (ioaddr, io_size);
+err_out_free_netdev:
+ unregister_netdev (dev);
+ kfree (dev);
+ return -ENODEV;
+}
+
+
+/* Read the MII Management Data I/O (MDIO) interfaces. */
+
+static int mdio_read(struct net_device *dev, int phy_id, int location)
+{
+ long mdio_addr = dev->base_addr + MIICtrl + (phy_id<<7) + (location<<2);
+ int result, boguscnt=1000;
+ /* ??? Should we add a busy-wait here? */
+ do
+ result = readl(mdio_addr);
+ while ((result & 0xC0000000) != 0x80000000 && --boguscnt > 0);
+ if (boguscnt == 0)
+ return 0;
+ if ((result & 0xffff) == 0xffff)
+ return 0;
+ return result & 0xffff;
+}
+
+static void mdio_write(struct net_device *dev, int phy_id, int location, int value)
+{
+ long mdio_addr = dev->base_addr + MIICtrl + (phy_id<<7) + (location<<2);
+ writel(value, mdio_addr);
+ /* The busy-wait will occur before a read. */
+ return;
+}
+
+
+static int netdev_open(struct net_device *dev)
+{
+ struct netdev_private *np = dev->priv;
+ long ioaddr = dev->base_addr;
+ int i, retval;
+
+ /* Do we ever need to reset the chip??? */
+
+ COMPAT_MOD_INC_USE_COUNT;
+
+ retval = request_irq(dev->irq, &intr_handler, SA_SHIRQ, dev->name, dev);
+ if (retval) {
+ COMPAT_MOD_DEC_USE_COUNT;
+ return retval;
+ }
+
+ /* Disable the Rx and Tx, and reset the chip. */
+ writel(0, ioaddr + GenCtrl);
+ writel(1, ioaddr + PCIDeviceConfig);
+ if (debug > 1)
+ printk(KERN_DEBUG "%s: netdev_open() irq %d.\n",
+ dev->name, dev->irq);
+ /* Allocate the various queues, failing gracefully. */
+ if (np->tx_done_q == 0)
+ np->tx_done_q = pci_alloc_consistent(np->pci_dev, PAGE_SIZE, &np->tx_done_q_dma);
+ if (np->rx_done_q == 0)
+ np->rx_done_q = pci_alloc_consistent(np->pci_dev, sizeof(struct rx_done_desc) * DONE_Q_SIZE, &np->rx_done_q_dma);
+ if (np->tx_ring == 0)
+ np->tx_ring = pci_alloc_consistent(np->pci_dev, PAGE_SIZE, &np->tx_ring_dma);
+ if (np->rx_ring == 0)
+ np->rx_ring = pci_alloc_consistent(np->pci_dev, PAGE_SIZE, &np->rx_ring_dma);
+ if (np->tx_done_q == 0 || np->rx_done_q == 0
+ || np->rx_ring == 0 || np->tx_ring == 0) {
+ if (np->tx_done_q)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->tx_done_q, np->tx_done_q_dma);
+ if (np->rx_done_q)
+ pci_free_consistent(np->pci_dev, sizeof(struct rx_done_desc) * DONE_Q_SIZE,
+ np->rx_done_q, np->rx_done_q_dma);
+ if (np->tx_ring)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->tx_ring, np->tx_ring_dma);
+ if (np->rx_ring)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->rx_ring, np->rx_ring_dma);
+ COMPAT_MOD_DEC_USE_COUNT;
+ return -ENOMEM;
+ }
+
+ init_ring(dev);
+ /* Set the size of the Rx buffers. */
+ writel((np->rx_buf_sz << RxBufferLenShift) |
+ (0 << RxMinDescrThreshShift) |
+ RxPrefetchMode | RxVariableQ |
+ RxDescSpace4,
+ ioaddr + RxDescQCtrl);
+
+ /* Set Tx descriptor to type 1 and padding to 0 bytes. */
+ writel((2 << TxHiPriFIFOThreshShift) |
+ (0 << TxPadLenShift) |
+ (4 << TxDMABurstSizeShift) |
+ TxDescSpaceUnlim | TxDescType1,
+ ioaddr + TxDescCtrl);
+
+#if defined(ADDR_64BITS) && defined(__alpha__)
+ /* XXX We really need a 64-bit PCI dma interfaces too... -DaveM */
+ writel(np->rx_ring_dma >> 32, ioaddr + RxDescQHiAddr);
+ writel(np->tx_ring_dma >> 32, ioaddr + TxRingHiAddr);
+#else
+ writel(0, ioaddr + RxDescQHiAddr);
+ writel(0, ioaddr + TxRingHiAddr);
+ writel(0, ioaddr + CompletionHiAddr);
+#endif
+ writel(np->rx_ring_dma, ioaddr + RxDescQAddr);
+ writel(np->tx_ring_dma, ioaddr + TxRingPtr);
+
+ writel(np->tx_done_q_dma, ioaddr + TxCompletionAddr);
+#ifdef full_rx_status
+ writel(np->rx_done_q_dma |
+ RxComplType3 |
+ (0 << RxComplThreshShift),
+ ioaddr + RxCompletionAddr);
+#else /* not full_rx_status */
+#ifdef csum_rx_status
+ writel(np->rx_done_q_dma |
+ RxComplType2 |
+ (0 << RxComplThreshShift),
+ ioaddr + RxCompletionAddr);
+#else /* not csum_rx_status */
+ writel(np->rx_done_q_dma |
+ RxComplType0 |
+ (0 << RxComplThreshShift),
+ ioaddr + RxCompletionAddr);
+#endif /* not csum_rx_status */
+#endif /* not full_rx_status */
+
+ if (debug > 1)
+ printk(KERN_DEBUG "%s: Filling in the station address.\n", dev->name);
+
+ /* Fill both the unused Tx SA register and the Rx perfect filter. */
+ for (i = 0; i < 6; i++)
+ writeb(dev->dev_addr[i], ioaddr + StationAddr + 5-i);
+ for (i = 0; i < 16; i++) {
+ u16 *eaddrs = (u16 *)dev->dev_addr;
+ long setup_frm = ioaddr + 0x56000 + i*16;
+ writew(cpu_to_be16(eaddrs[2]), setup_frm); setup_frm += 4;
+ writew(cpu_to_be16(eaddrs[1]), setup_frm); setup_frm += 4;
+ writew(cpu_to_be16(eaddrs[0]), setup_frm); setup_frm += 8;
+ }
+
+ /* Initialize other registers. */
+ /* Configure the PCI bus bursts and FIFO thresholds. */
+ np->tx_mode = 0; /* Initialized when TxMode set. */
+ np->tx_threshold = 4;
+ writel(np->tx_threshold, ioaddr + TxThreshold);
+ writel(interrupt_mitigation, ioaddr + IntrTimerCtrl);
+
+ if (dev->if_port == 0)
+ dev->if_port = np->default_port;
+
+ netif_start_queue(dev);
+
+ if (debug > 1)
+ printk(KERN_DEBUG "%s: Setting the Rx and Tx modes.\n", dev->name);
+ set_rx_mode(dev);
+
+ np->advertising = mdio_read(dev, np->phys[0], 4);
+ check_duplex(dev, 1);
+
+ /* Set the interrupt mask and enable PCI interrupts. */
+ writel(IntrRxDone | IntrRxEmpty | IntrDMAErr |
+ IntrTxDone | IntrStatsMax | IntrLinkChange |
+ IntrNormalSummary | IntrAbnormalSummary |
+ IntrRxGFPDead | IntrNoTxCsum | IntrTxBadID,
+ ioaddr + IntrEnable);
+ writel(0x00800000 | readl(ioaddr + PCIDeviceConfig),
+ ioaddr + PCIDeviceConfig);
+
+#ifdef HAS_FIRMWARE
+ /* Load Rx/Tx firmware into the frame processors */
+ for (i = 0; i < FIRMWARE_RX_SIZE * 2; i++)
+ writel(cpu_to_le32(firmware_rx[i]), ioaddr + RxGfpMem + i * 4);
+ for (i = 0; i < FIRMWARE_TX_SIZE * 2; i++)
+ writel(cpu_to_le32(firmware_tx[i]), ioaddr + TxGfpMem + i * 4);
+ /* Enable the Rx and Tx units, and the Rx/Tx frame processors. */
+ writel(0x003F, ioaddr + GenCtrl);
+#else /* not HAS_FIRMWARE */
+ /* Enable the Rx and Tx units only. */
+ writel(0x000F, ioaddr + GenCtrl);
+#endif /* not HAS_FIRMWARE */
+
+ if (debug > 2)
+ printk(KERN_DEBUG "%s: Done netdev_open().\n",
+ dev->name);
+
+ /* Set the timer to check for link beat. */
+ init_timer(&np->timer);
+ np->timer.expires = jiffies + 3*HZ;
+ np->timer.data = (unsigned long)dev;
+ np->timer.function = &netdev_timer; /* timer handler */
+ add_timer(&np->timer);
+
+ return 0;
+}
+
+static void check_duplex(struct net_device *dev, int startup)
+{
+ struct netdev_private *np = dev->priv;
+ long ioaddr = dev->base_addr;
+ int new_tx_mode;
+
+ new_tx_mode = 0x0C04 | (np->tx_flowctrl ? 0x0800:0)
+ | (np->rx_flowctrl ? 0x0400:0);
+ if (np->medialock) {
+ if (np->full_duplex)
+ new_tx_mode |= 2;
+ } else {
+ int mii_reg5 = mdio_read(dev, np->phys[0], 5);
+ int negotiated = mii_reg5 & np->advertising;
+ int duplex = (negotiated & 0x0100) || (negotiated & 0x01C0) == 0x0040;
+ if (duplex)
+ new_tx_mode |= 2;
+ if (np->full_duplex != duplex) {
+ np->full_duplex = duplex;
+ if (debug > 1)
+ printk(KERN_INFO "%s: Setting %s-duplex based on MII #%d"
+ " negotiated capability %4.4x.\n", dev->name,
+ duplex ? "full" : "half", np->phys[0], negotiated);
+ }
+ }
+ if (new_tx_mode != np->tx_mode) {
+ np->tx_mode = new_tx_mode;
+ writel(np->tx_mode | 0x8000, ioaddr + TxMode);
+ writel(np->tx_mode, ioaddr + TxMode);
+ }
+}
+
+static void netdev_timer(unsigned long data)
+{
+ struct net_device *dev = (struct net_device *)data;
+ struct netdev_private *np = dev->priv;
+ long ioaddr = dev->base_addr;
+ int next_tick = 60*HZ; /* Check before driver release. */
+
+ if (debug > 3) {
+ printk(KERN_DEBUG "%s: Media selection timer tick, status %8.8x.\n",
+ dev->name, (int)readl(ioaddr + IntrStatus));
+ }
+ check_duplex(dev, 0);
+#if ! defined(final_version)
+ /* This is often falsely triggered. */
+ if (readl(ioaddr + IntrStatus) & 1) {
+ int new_status = readl(ioaddr + IntrStatus);
+ /* Bogus hardware IRQ: Fake an interrupt handler call. */
+ if (new_status & 1) {
+ printk(KERN_ERR "%s: Interrupt blocked, status %8.8x/%8.8x.\n",
+ dev->name, new_status, (int)readl(ioaddr + IntrStatus));
+ intr_handler(dev->irq, dev, 0);
+ }
+ }
+#endif
+
+ np->timer.expires = jiffies + next_tick;
+ add_timer(&np->timer);
+}
+
+static void tx_timeout(struct net_device *dev)
+{
+ struct netdev_private *np = dev->priv;
+ long ioaddr = dev->base_addr;
+
+ printk(KERN_WARNING "%s: Transmit timed out, status %8.8x,"
+ " resetting...\n", dev->name, (int)readl(ioaddr + IntrStatus));
+
+#ifndef __alpha__
+ {
+ int i;
+ printk(KERN_DEBUG " Rx ring %p: ", np->rx_ring);
+ for (i = 0; i < RX_RING_SIZE; i++)
+ printk(" %8.8x", (unsigned int)le32_to_cpu(np->rx_ring[i].rxaddr));
+ printk("\n"KERN_DEBUG" Tx ring %p: ", np->tx_ring);
+ for (i = 0; i < TX_RING_SIZE; i++)
+ printk(" %4.4x", le32_to_cpu(np->tx_ring[i].status));
+ printk("\n");
+ }
+#endif
+
+ /* Perhaps we should reinitialize the hardware here. */
+ dev->if_port = 0;
+ /* Stop and restart the chip's Tx processes . */
+
+ /* Trigger an immediate transmit demand. */
+
+ dev->trans_start = jiffies;
+ np->stats.tx_errors++;
+ netif_wake_queue(dev);
+}
+
+
+/* Initialize the Rx and Tx rings, along with various 'dev' bits. */
+static void init_ring(struct net_device *dev)
+{
+ struct netdev_private *np = dev->priv;
+ int i;
+
+ np->tx_full = 0;
+ np->cur_rx = np->cur_tx = 0;
+ np->dirty_rx = np->rx_done = np->dirty_tx = np->tx_done = 0;
+
+ np->rx_buf_sz = (dev->mtu <= 1500 ? PKT_BUF_SZ : dev->mtu + 32);
+
+ /* Fill in the Rx buffers. Handle allocation failure gracefully. */
+ for (i = 0; i < RX_RING_SIZE; i++) {
+ struct sk_buff *skb = dev_alloc_skb(np->rx_buf_sz);
+ np->rx_info[i].skb = skb;
+ if (skb == NULL)
+ break;
+ np->rx_info[i].mapping = pci_map_single(np->pci_dev, skb->tail, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ skb->dev = dev; /* Mark as being used by this device. */
+ /* Grrr, we cannot offset to correctly align the IP header. */
+ np->rx_ring[i].rxaddr = cpu_to_le32(np->rx_info[i].mapping | RxDescValid);
+ }
+ writew(i - 1, dev->base_addr + RxDescQIdx);
+ np->dirty_rx = (unsigned int)(i - RX_RING_SIZE);
+
+ /* Clear the remainder of the Rx buffer ring. */
+ for ( ; i < RX_RING_SIZE; i++) {
+ np->rx_ring[i].rxaddr = 0;
+ np->rx_info[i].skb = NULL;
+ np->rx_info[i].mapping = 0;
+ }
+ /* Mark the last entry as wrapping the ring. */
+ np->rx_ring[i-1].rxaddr |= cpu_to_le32(RxDescEndRing);
+
+ /* Clear the completion rings. */
+ for (i = 0; i < DONE_Q_SIZE; i++) {
+ np->rx_done_q[i].status = 0;
+ np->tx_done_q[i].status = 0;
+ }
+
+ for (i = 0; i < TX_RING_SIZE; i++) {
+ np->tx_info[i].skb = NULL;
+ np->tx_info[i].first_mapping = 0;
+ np->tx_ring[i].status = 0;
+ }
+ return;
+}
+
+static int start_tx(struct sk_buff *skb, struct net_device *dev)
+{
+ struct netdev_private *np = dev->priv;
+ unsigned int entry;
+
+ kick_tx_timer(dev, tx_timeout, TX_TIMEOUT);
+
+ /* Caution: the write order is important here, set the field
+ with the "ownership" bits last. */
+
+ /* Calculate the next Tx descriptor entry. */
+ entry = np->cur_tx % TX_RING_SIZE;
+
+ np->tx_info[entry].skb = skb;
+ np->tx_info[entry].first_mapping =
+ pci_map_single(np->pci_dev, skb->data, skb_first_frag_len(skb), PCI_DMA_TODEVICE);
+
+ np->tx_ring[entry].first_addr = cpu_to_le32(np->tx_info[entry].first_mapping);
+ /* Add "| TxDescIntr" to generate Tx-done interrupts. */
+ np->tx_ring[entry].status = cpu_to_le32(skb->len | TxDescID | TxCRCEn | 1 << 16);
+
+ if (entry >= TX_RING_SIZE-1) /* Wrap ring */
+ np->tx_ring[entry].status |= cpu_to_le32(TxRingWrap | TxDescIntr);
+
+ if (debug > 5) {
+ printk(KERN_DEBUG "%s: Tx #%d slot %d status %8.8x.\n",
+ dev->name, np->cur_tx, entry,
+ le32_to_cpu(np->tx_ring[entry].status));
+ }
+
+ np->cur_tx++;
+
+ if (entry >= TX_RING_SIZE-1) /* Wrap ring */
+ entry = -1;
+ entry++;
+
+ /* Non-x86: explicitly flush descriptor cache lines here. */
+ /* Ensure everything is written back above before the transmit is
+ initiated. - Jes */
+ wmb();
+
+ /* Update the producer index. */
+ writel(entry * (sizeof(struct starfire_tx_desc) / 8), dev->base_addr + TxProducerIdx);
+
+ if (np->cur_tx - np->dirty_tx >= TX_RING_SIZE - 1) {
+ np->tx_full = 1;
+ netif_stop_queue(dev);
+ }
+
+ dev->trans_start = jiffies;
+
+ return 0;
+}
+
+/* The interrupt handler does all of the Rx thread work and cleans up
+ after the Tx thread. */
+static void intr_handler(int irq, void *dev_instance, struct pt_regs *rgs)
+{
+ struct net_device *dev = (struct net_device *)dev_instance;
+ struct netdev_private *np;
+ long ioaddr;
+ int boguscnt = max_interrupt_work;
+ int consumer;
+ int tx_status;
+
+#ifndef final_version /* Can never occur. */
+ if (dev == NULL) {
+ printk (KERN_ERR "Netdev interrupt handler(): IRQ %d for unknown device.\n", irq);
+ return;
+ }
+#endif
+
+ ioaddr = dev->base_addr;
+ np = dev->priv;
+
+ do {
+ u32 intr_status = readl(ioaddr + IntrClear);
+
+ if (debug > 4)
+ printk(KERN_DEBUG "%s: Interrupt status %4.4x.\n",
+ dev->name, intr_status);
+
+ if (intr_status == 0)
+ break;
+
+ if (intr_status & IntrRxDone)
+ netdev_rx(dev);
+
+ /* Scavenge the skbuff list based on the Tx-done queue.
+ There are redundant checks here that may be cleaned up
+ after the driver has proven to be reliable. */
+ consumer = readl(ioaddr + TxConsumerIdx);
+ if (debug > 4)
+ printk(KERN_DEBUG "%s: Tx Consumer index is %d.\n",
+ dev->name, consumer);
+#if 0
+ if (np->tx_done >= 250 || np->tx_done == 0)
+ printk(KERN_DEBUG "%s: Tx completion entry %d is %8.8x, %d is %8.8x.\n",
+ dev->name, np->tx_done,
+ le32_to_cpu(np->tx_done_q[np->tx_done].status),
+ (np->tx_done+1) & (DONE_Q_SIZE-1),
+ le32_to_cpu(np->tx_done_q[(np->tx_done+1)&(DONE_Q_SIZE-1)].status));
+#endif
+
+ while ((tx_status = le32_to_cpu(np->tx_done_q[np->tx_done].status)) != 0) {
+ if (debug > 4)
+ printk(KERN_DEBUG "%s: Tx completion entry %d is %8.8x.\n",
+ dev->name, np->tx_done, tx_status);
+ if ((tx_status & 0xe0000000) == 0xa0000000) {
+ np->stats.tx_packets++;
+ } else if ((tx_status & 0xe0000000) == 0x80000000) {
+ struct sk_buff *skb;
+ u16 entry = tx_status; /* Implicit truncate */
+ entry /= sizeof(struct starfire_tx_desc);
+
+ skb = np->tx_info[entry].skb;
+ np->tx_info[entry].skb = NULL;
+ pci_unmap_single(np->pci_dev,
+ np->tx_info[entry].first_mapping,
+ skb_first_frag_len(skb),
+ PCI_DMA_TODEVICE);
+ np->tx_info[entry].first_mapping = 0;
+
+ /* Scavenge the descriptor. */
+ dev_kfree_skb_irq(skb);
+
+ np->dirty_tx++;
+ }
+ np->tx_done_q[np->tx_done].status = 0;
+ np->tx_done = (np->tx_done+1) & (DONE_Q_SIZE-1);
+ }
+ writew(np->tx_done, ioaddr + CompletionQConsumerIdx + 2);
+
+ if (np->tx_full && np->cur_tx - np->dirty_tx < TX_RING_SIZE - 4) {
+ /* The ring is no longer full, wake the queue. */
+ np->tx_full = 0;
+ netif_wake_queue(dev);
+ }
+
+ /* Abnormal error summary/uncommon events handlers. */
+ if (intr_status & IntrAbnormalSummary)
+ netdev_error(dev, intr_status);
+
+ if (--boguscnt < 0) {
+ printk(KERN_WARNING "%s: Too much work at interrupt, "
+ "status=0x%4.4x.\n",
+ dev->name, intr_status);
+ break;
+ }
+ } while (1);
+
+ if (debug > 4)
+ printk(KERN_DEBUG "%s: exiting interrupt, status=%#4.4x.\n",
+ dev->name, (int)readl(ioaddr + IntrStatus));
+
+#ifndef final_version
+ /* Code that should never be run! Remove after testing.. */
+ {
+ static int stopit = 10;
+ if (!netif_running(dev) && --stopit < 0) {
+ printk(KERN_ERR "%s: Emergency stop, looping startup interrupt.\n",
+ dev->name);
+ free_irq(irq, dev);
+ }
+ }
+#endif
+}
+
+/* This routine is logically part of the interrupt handler, but separated
+ for clarity and better register allocation. */
+static int netdev_rx(struct net_device *dev)
+{
+ struct netdev_private *np = dev->priv;
+ int boguscnt = np->dirty_rx + RX_RING_SIZE - np->cur_rx;
+ u32 desc_status;
+
+ if (np->rx_done_q == 0) {
+ printk(KERN_ERR "%s: rx_done_q is NULL! rx_done is %d. %p.\n",
+ dev->name, np->rx_done, np->tx_done_q);
+ return 0;
+ }
+
+ /* If EOP is set on the next entry, it's a new packet. Send it up. */
+ while ((desc_status = le32_to_cpu(np->rx_done_q[np->rx_done].status)) != 0) {
+ struct sk_buff *skb;
+ u16 pkt_len;
+ int entry;
+
+ if (debug > 4)
+ printk(KERN_DEBUG " netdev_rx() status of %d was %8.8x.\n", np->rx_done, desc_status);
+ if (--boguscnt < 0)
+ break;
+ if ( ! (desc_status & RxOK)) {
+ /* There was an error. */
+ if (debug > 2)
+ printk(KERN_DEBUG " netdev_rx() Rx error was %8.8x.\n", desc_status);
+ np->stats.rx_errors++;
+ if (desc_status & RxFIFOErr)
+ np->stats.rx_fifo_errors++;
+ goto next_rx;
+ }
+
+ pkt_len = desc_status; /* Implicitly Truncate */
+ entry = (desc_status >> 16) & 0x7ff;
+
+#ifndef final_version
+ if (debug > 4)
+ printk(KERN_DEBUG " netdev_rx() normal Rx pkt length %d, bogus_cnt %d.\n", pkt_len, boguscnt);
+#endif
+ /* Check if the packet is long enough to accept without copying
+ to a minimally-sized skbuff. */
+ if (pkt_len < rx_copybreak
+ && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
+ skb->dev = dev;
+ skb_reserve(skb, 2); /* 16 byte align the IP header */
+ pci_dma_sync_single(np->pci_dev,
+ np->rx_info[entry].mapping,
+ pkt_len, PCI_DMA_FROMDEVICE);
+#if HAS_IP_COPYSUM /* Call copy + cksum if available. */
+ eth_copy_and_sum(skb, np->rx_info[entry].skb->tail, pkt_len, 0);
+ skb_put(skb, pkt_len);
+#else
+ memcpy(skb_put(skb, pkt_len), np->rx_info[entry].skb->tail, pkt_len);
+#endif
+ } else {
+ char *temp;
+
+ pci_unmap_single(np->pci_dev, np->rx_info[entry].mapping, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ skb = np->rx_info[entry].skb;
+ temp = skb_put(skb, pkt_len);
+ np->rx_info[entry].skb = NULL;
+ np->rx_info[entry].mapping = 0;
+ }
+#ifndef final_version /* Remove after testing. */
+ /* You will want this info for the initial debug. */
+ if (debug > 5)
+ printk(KERN_DEBUG " Rx data %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:"
+ "%2.2x %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x %2.2x%2.2x "
+ "%d.%d.%d.%d.\n",
+ skb->data[0], skb->data[1], skb->data[2], skb->data[3],
+ skb->data[4], skb->data[5], skb->data[6], skb->data[7],
+ skb->data[8], skb->data[9], skb->data[10],
+ skb->data[11], skb->data[12], skb->data[13],
+ skb->data[14], skb->data[15], skb->data[16],
+ skb->data[17]);
+#endif
+ skb->protocol = eth_type_trans(skb, dev);
+#if defined(full_rx_status) || defined(csum_rx_status)
+ if (le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0x01000000) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ }
+ /*
+ * This feature doesn't seem to be working, at least
+ * with the two firmware versions I have. If the GFP sees
+ * a fragment, it either ignores it completely, or reports
+ * "bad checksum" on it.
+ *
+ * Maybe I missed something -- corrections are welcome.
+ * Until then, the printk stays. :-) -Ion
+ */
+ else if (le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0x00400000) {
+ skb->ip_summed = CHECKSUM_HW;
+ skb->csum = le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0xffff;
+ printk(KERN_DEBUG "%s: checksum_hw, status2 = %x\n", dev->name, np->rx_done_q[np->rx_done].status2);
+ }
+#endif
+ netif_rx(skb);
+ dev->last_rx = jiffies;
+ np->stats.rx_packets++;
+
+next_rx:
+ np->cur_rx++;
+ np->rx_done_q[np->rx_done].status = 0;
+ np->rx_done = (np->rx_done + 1) & (DONE_Q_SIZE-1);
+ }
+ writew(np->rx_done, dev->base_addr + CompletionQConsumerIdx);
+
+ /* Refill the Rx ring buffers. */
+ for (; np->cur_rx - np->dirty_rx > 0; np->dirty_rx++) {
+ struct sk_buff *skb;
+ int entry = np->dirty_rx % RX_RING_SIZE;
+ if (np->rx_info[entry].skb == NULL) {
+ skb = dev_alloc_skb(np->rx_buf_sz);
+ np->rx_info[entry].skb = skb;
+ if (skb == NULL)
+ break; /* Better luck next round. */
+ np->rx_info[entry].mapping =
+ pci_map_single(np->pci_dev, skb->tail, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ skb->dev = dev; /* Mark as being used by this device. */
+ np->rx_ring[entry].rxaddr =
+ cpu_to_le32(np->rx_info[entry].mapping | RxDescValid);
+ }
+ if (entry == RX_RING_SIZE - 1)
+ np->rx_ring[entry].rxaddr |= cpu_to_le32(RxDescEndRing);
+ /* We could defer this until later... */
+ writew(entry, dev->base_addr + RxDescQIdx);
+ }
+
+ if (debug > 5
+ || memcmp(np->pad0, np->pad0 + 1, sizeof(np->pad0) -1))
+ printk(KERN_DEBUG " exiting netdev_rx() status of %d was %8.8x %d.\n",
+ np->rx_done, desc_status,
+ memcmp(np->pad0, np->pad0 + 1, sizeof(np->pad0) -1));
+
+ /* Restart Rx engine if stopped. */
+ return 0;
+}
+
+static void netdev_error(struct net_device *dev, int intr_status)
+{
+ struct netdev_private *np = dev->priv;
+
+ if (intr_status & IntrLinkChange) {
+ printk(KERN_NOTICE "%s: Link changed: Autonegotiation advertising"
+ " %4.4x, partner %4.4x.\n", dev->name,
+ mdio_read(dev, np->phys[0], 4),
+ mdio_read(dev, np->phys[0], 5));
+ check_duplex(dev, 0);
+ }
+ if (intr_status & IntrStatsMax) {
+ get_stats(dev);
+ }
+ /* Came close to underrunning the Tx FIFO, increase threshold. */
+ if (intr_status & IntrTxDataLow)
+ writel(++np->tx_threshold, dev->base_addr + TxThreshold);
+ if ((intr_status & ~(IntrAbnormalSummary|IntrLinkChange|IntrStatsMax|IntrTxDataLow|1)) && debug)
+ printk(KERN_ERR "%s: Something Wicked happened! %4.4x.\n",
+ dev->name, intr_status);
+ /* Hmmmmm, it's not clear how to recover from DMA faults. */
+ if (intr_status & IntrDMAErr)
+ np->stats.tx_fifo_errors++;
+}
+
+static struct net_device_stats *get_stats(struct net_device *dev)
+{
+ long ioaddr = dev->base_addr;
+ struct netdev_private *np = dev->priv;
+
+ /* This adapter architecture needs no SMP locks. */
+ np->stats.tx_bytes = readl(ioaddr + 0x57010);
+ np->stats.rx_bytes = readl(ioaddr + 0x57044);
+ np->stats.tx_packets = readl(ioaddr + 0x57000);
+ np->stats.tx_aborted_errors =
+ readl(ioaddr + 0x57024) + readl(ioaddr + 0x57028);
+ np->stats.tx_window_errors = readl(ioaddr + 0x57018);
+ np->stats.collisions =
+ readl(ioaddr + 0x57004) + readl(ioaddr + 0x57008);
+
+ /* The chip only needs to report frames it silently dropped. */
+ np->stats.rx_dropped += readw(ioaddr + RxDMAStatus);
+ writew(0, ioaddr + RxDMAStatus);
+ np->stats.rx_crc_errors = readl(ioaddr + 0x5703C);
+ np->stats.rx_frame_errors = readl(ioaddr + 0x57040);
+ np->stats.rx_length_errors = readl(ioaddr + 0x57058);
+ np->stats.rx_missed_errors = readl(ioaddr + 0x5707C);
+
+ return &np->stats;
+}
+
+/* The little-endian AUTODIN II ethernet CRC calculations.
+ A big-endian version is also available.
+ This is slow but compact code. Do not use this routine for bulk data,
+ use a table-based routine instead.
+ This is common code and should be moved to net/core/crc.c.
+ Chips may use the upper or lower CRC bits, and may reverse and/or invert
+ them. Select the endian-ness that results in minimal calculations.
+*/
+static unsigned const ethernet_polynomial_le = 0xedb88320U;
+static inline unsigned ether_crc_le(int length, unsigned char *data)
+{
+ unsigned int crc = 0xffffffff; /* Initial value. */
+ while(--length >= 0) {
+ unsigned char current_octet = *data++;
+ int bit;
+ for (bit = 8; --bit >= 0; current_octet >>= 1) {
+ if ((crc ^ current_octet) & 1) {
+ crc >>= 1;
+ crc ^= ethernet_polynomial_le;
+ } else
+ crc >>= 1;
+ }
+ }
+ return crc;
+}
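+
+/*
+ * Hash filter note: the 512 entry multicast hash used by set_rx_mode()
+ * below is indexed by the top nine bits of this CRC, roughly
+ *
+ *	set_bit(ether_crc_le(ETH_ALEN, mclist->dmi_addr) >> 23, mc_filter);
+ *
+ * where mc_filter[] is 32 x u16 = 512 bits, written out to the chip at
+ * offset 0x56100 in 16 byte strides.
+ */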
+
+static void set_rx_mode(struct net_device *dev)
+{
+ long ioaddr = dev->base_addr;
+ u32 rx_mode;
+ struct dev_mc_list *mclist;
+ int i;
+
+ if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
+ /* Unconditionally log net taps. */
+ printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n", dev->name);
+ rx_mode = AcceptBroadcast|AcceptAllMulticast|AcceptAll|AcceptMyPhys;
+ } else if ((dev->mc_count > multicast_filter_limit)
+ || (dev->flags & IFF_ALLMULTI)) {
+ /* Too many to match, or accept all multicasts. */
+ rx_mode = AcceptBroadcast|AcceptAllMulticast|AcceptMyPhys;
+ } else if (dev->mc_count <= 15) {
+ /* Use the 16 element perfect filter. */
+ long filter_addr = ioaddr + 0x56000 + 1*16;
+ for (i = 1, mclist = dev->mc_list; mclist && i <= dev->mc_count;
+ i++, mclist = mclist->next) {
+ u16 *eaddrs = (u16 *)mclist->dmi_addr;
+ writew(cpu_to_be16(eaddrs[2]), filter_addr); filter_addr += 4;
+ writew(cpu_to_be16(eaddrs[1]), filter_addr); filter_addr += 4;
+ writew(cpu_to_be16(eaddrs[0]), filter_addr); filter_addr += 8;
+ }
+ while (i++ < 16) {
+ writew(0xffff, filter_addr); filter_addr += 4;
+ writew(0xffff, filter_addr); filter_addr += 4;
+ writew(0xffff, filter_addr); filter_addr += 8;
+ }
+ rx_mode = AcceptBroadcast | AcceptMyPhys;
+ } else {
+ /* Must use a multicast hash table. */
+ long filter_addr;
+ u16 mc_filter[32] __attribute__ ((aligned(sizeof(long)))); /* Multicast hash filter */
+
+ memset(mc_filter, 0, sizeof(mc_filter));
+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
+ i++, mclist = mclist->next) {
+ set_bit(ether_crc_le(ETH_ALEN, mclist->dmi_addr) >> 23, mc_filter);
+ }
+ /* Clear the perfect filter list. */
+ filter_addr = ioaddr + 0x56000 + 1*16;
+ for (i = 1; i < 16; i++) {
+ writew(0xffff, filter_addr); filter_addr += 4;
+ writew(0xffff, filter_addr); filter_addr += 4;
+ writew(0xffff, filter_addr); filter_addr += 8;
+ }
+ for (filter_addr=ioaddr + 0x56100, i=0; i < 32; filter_addr+= 16, i++)
+ writew(mc_filter[i], filter_addr);
+ rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
+ }
+ writel(rx_mode, ioaddr + RxFilterMode);
+}
+
+static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ struct netdev_private *np = dev->priv;
+ u16 *data = (u16 *)&rq->ifr_data;
+
+ switch(cmd) {
+ case SIOCDEVPRIVATE: /* Get the address of the PHY in use. */
+ data[0] = np->phys[0] & 0x1f;
+ /* Fall Through */
+ case SIOCDEVPRIVATE+1: /* Read the specified MII register. */
+ data[3] = mdio_read(dev, data[0] & 0x1f, data[1] & 0x1f);
+ return 0;
+ case SIOCDEVPRIVATE+2: /* Write the specified MII register */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ if (data[0] == np->phys[0]) {
+ u16 value = data[2];
+ switch (data[1]) {
+ case 0:
+ if (value & 0x9000) /* Autonegotiation. */
+ np->medialock = 0;
+ else {
+ np->full_duplex = (value & 0x0100) ? 1 : 0;
+ np->medialock = 1;
+ }
+ break;
+ case 4: np->advertising = value; break;
+ }
+ check_duplex(dev, 0);
+ }
+ mdio_write(dev, data[0] & 0x1f, data[1] & 0x1f, data[2]);
+ return 0;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static int netdev_close(struct net_device *dev)
+{
+ long ioaddr = dev->base_addr;
+ struct netdev_private *np = dev->priv;
+ int i;
+
+ netif_device_detach(dev);
+
+ del_timer_sync(&np->timer);
+
+ if (debug > 1) {
+ printk(KERN_DEBUG "%s: Shutting down ethercard, Intr status %4.4x.\n",
+ dev->name, (int)readl(ioaddr + IntrStatus));
+ printk(KERN_DEBUG "%s: Queue pointers were Tx %d / %d, Rx %d / %d.\n",
+ dev->name, np->cur_tx, np->dirty_tx, np->cur_rx, np->dirty_rx);
+ }
+
+ /* Disable interrupts by clearing the interrupt mask. */
+ writel(0, ioaddr + IntrEnable);
+
+ /* Stop the chip's Tx and Rx processes. */
+
+#ifdef __i386__
+ if (debug > 2) {
+ printk("\n"KERN_DEBUG" Tx ring at %8.8x:\n",
+ np->tx_ring_dma);
+ for (i = 0; i < 8 /* TX_RING_SIZE is huge! */; i++)
+ printk(KERN_DEBUG " #%d desc. %8.8x %8.8x -> %8.8x.\n",
+ i, le32_to_cpu(np->tx_ring[i].status),
+ le32_to_cpu(np->tx_ring[i].first_addr),
+ le32_to_cpu(np->tx_done_q[i].status));
+ printk(KERN_DEBUG " Rx ring at %8.8x -> %p:\n",
+ np->rx_ring_dma, np->rx_done_q);
+ if (np->rx_done_q)
+ for (i = 0; i < 8 /* RX_RING_SIZE */; i++) {
+ printk(KERN_DEBUG " #%d desc. %8.8x -> %8.8x\n",
+ i, le32_to_cpu(np->rx_ring[i].rxaddr), le32_to_cpu(np->rx_done_q[i].status));
+ }
+ }
+#endif /* __i386__ debugging only */
+
+ free_irq(dev->irq, dev);
+
+ /* Free all the skbuffs in the Rx queue. */
+ for (i = 0; i < RX_RING_SIZE; i++) {
+ np->rx_ring[i].rxaddr = cpu_to_le32(0xBADF00D0); /* An invalid address. */
+ if (np->rx_info[i].skb != NULL) {
+ pci_unmap_single(np->pci_dev, np->rx_info[i].mapping, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(np->rx_info[i].skb);
+ }
+ np->rx_info[i].skb = NULL;
+ np->rx_info[i].mapping = 0;
+ }
+ for (i = 0; i < TX_RING_SIZE; i++) {
+ struct sk_buff *skb = np->tx_info[i].skb;
+ if (skb == NULL)
+ continue;
+ pci_unmap_single(np->pci_dev,
+ np->tx_info[i].first_mapping,
+ skb_first_frag_len(skb), PCI_DMA_TODEVICE);
+ np->tx_info[i].first_mapping = 0;
+ dev_kfree_skb(skb);
+ np->tx_info[i].skb = NULL;
+ }
+
+ COMPAT_MOD_DEC_USE_COUNT;
+
+ return 0;
+}
+
+
+static void __devexit starfire_remove_one (struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct netdev_private *np;
+
+ if (!dev)
+ BUG();
+
+ np = dev->priv;
+
+ unregister_netdev(dev);
+ iounmap((char *)dev->base_addr);
+
+ release_mem_region(pci_resource_start (pdev, 0),
+ pci_resource_len (pdev, 0));
+
+ if (np->tx_done_q)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->tx_done_q, np->tx_done_q_dma);
+ if (np->rx_done_q)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->rx_done_q, np->rx_done_q_dma);
+ if (np->tx_ring)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->tx_ring, np->tx_ring_dma);
+ if (np->rx_ring)
+ pci_free_consistent(np->pci_dev, PAGE_SIZE,
+ np->rx_ring, np->rx_ring_dma);
+
+ kfree(dev);
+}
+
+
+static struct pci_driver starfire_driver = {
+ name: "starfire",
+ probe: starfire_init_one,
+ remove: starfire_remove_one,
+ id_table: starfire_pci_tbl,
+};
+
+
+static int __init starfire_init (void)
+{
+ return pci_module_init (&starfire_driver);
+}
+
+
+static void __exit starfire_cleanup (void)
+{
+ pci_unregister_driver (&starfire_driver);
+}
+
+
+module_init(starfire_init);
+module_exit(starfire_cleanup);
+
+
+/*
+ * Local variables:
+ * compile-command: "gcc -DMODULE -Wall -Wstrict-prototypes -O6 -c starfire.c"
+ * simple-compile-command: "gcc -DMODULE -O6 -c starfire.c"
+ * c-basic-offset: 8
+ * tab-width: 8
+ * End:
+ */
--- /usr/src/local/linux-2.2.18-vanilla/drivers/net/starfire_firmware.pl Sun Feb 11 15:43:13 2001
+++ linux-2.2.18/drivers/net/starfire_firmware.pl Wed Feb 7 17:59:17 2001
@@ -0,0 +1,31 @@
+#!/usr/bin/perl
+
+# This script can be used to generate a new starfire_firmware.h
+# from GFP_RX.DAT and GFP_TX.DAT, files included with the DDK
+# and also with the Novell drivers.
+
+open FW, "GFP_RX.DAT" or die "GFP_RX.DAT: $!";	# 'or', not '||': '|| die' binds to the filename and never triggers
+open FWH, ">starfire_firmware.h" or die "starfire_firmware.h: $!";
+
+printf(FWH "static u32 firmware_rx[] = {\n");
+$counter = 0;
+while ($foo = <FW>) {
+ chomp($foo);
+ printf(FWH " 0x%s, 0x0000%s,\n", substr($foo, 4, 8), substr($foo, 0, 4));
+ $counter++;
+}
+
+close FW;
+open FW, "GFP_TX.DAT" or die "GFP_TX.DAT: $!";
+
+printf(FWH "};\t/* %d Rx instructions */\n#define FIRMWARE_RX_SIZE %d\n\nstatic u32 firmware_tx[] = {\n", $counter, $counter);
+$counter = 0;
+while ($foo = <FW>) {
+ chomp($foo);
+ printf(FWH " 0x%s, 0x0000%s,\n", substr($foo, 4, 8), substr($foo, 0, 4));
+ $counter++;
+}
+
+close FW;
+printf(FWH "};\t/* %d Tx instructions */\n#define FIRMWARE_TX_SIZE %d\n", $counter, $counter);
+close(FWH);
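
For reference, the generated starfire_firmware.h ends up with roughly the
shape sketched below. The word values are invented for illustration (the
real ones come from GFP_RX.DAT/GFP_TX.DAT); only the layout follows the
printf formats in the script above:

static u32 firmware_rx[] = {
	0x01020304, 0x00000506,
	0x0708090a, 0x00000b0c,
};	/* 2 Rx instructions */
#define FIRMWARE_RX_SIZE 2

static u32 firmware_tx[] = {
	0x11121314, 0x00001516,
	0x1718191a, 0x00001b1c,
};	/* 2 Tx instructions */
#define FIRMWARE_TX_SIZE 2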

2001-02-12 10:07:11

by Alan

[permalink] [raw]
Subject: Re: [PATCH] new version of the starfire driver for 2.2.19pre

No resolution to firmware fiasco, no driver in kernel

2001-02-12 10:47:34

by Ion Badulescu

[permalink] [raw]
Subject: Re: [PATCH] new version of the starfire driver for 2.2.19pre

On Mon, 12 Feb 2001, Alan Cox wrote:

> No resolution to firmware fiasco, no driver in kernel

But the driver _does_ work without the firmware; it only loses the
hardware Rx TCP checksum capability. That's what we have in 2.4.x right
now, so why should 2.2.x be pickier and *demand* to have the firmware or
no support at all?
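
On the Rx side, the only thing the firmware buys is the checksum word in
the completion descriptor. A condensed sketch of what the build loses
without it, mirroring the driver's own ifdefs:

/*
 * Without HAS_FIRMWARE neither full_rx_status nor csum_rx_status is
 * defined, so the completion entry is just the status word and received
 * skbs are handed up with ip_summed left at CHECKSUM_NONE -- the stack
 * then verifies TCP/UDP checksums in software. Everything else works.
 */
struct rx_done_desc {
	u32 status;		/* low 16 bits: frame length */
#ifdef csum_rx_status
	u32 status2;		/* low 16 bits: partial checksum */
#endif
};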

Thanks,
Ion

--
It is better to keep your mouth shut and be thought a fool,
than to open it and remove all doubt.

2001-02-12 14:56:21

by Alan

[permalink] [raw]
Subject: Re: [PATCH] new version of the starfire driver for 2.2.19pre

> But the driver _does_ work without the firmware; it only loses the
> hardware Rx TCP checksum capability. That's what we have in 2.4.x right
> now, so why should 2.2.x be pickier and *demand* to have the firmware or
> no support at all?

OK, I didn't realise the firmware thing was TCP checksum paths only. That's fine.

2001-02-12 19:56:44

by Ion Badulescu

[permalink] [raw]
Subject: Re: [PATCH] new version of the starfire driver for 2.2.19pre

On Mon, 12 Feb 2001, Alan Cox wrote:

> OK, I didn't realise the firmware thing was TCP checksum paths only. That's fine.

Thanks.

Here is an incremental patch from the version in 2.2.19pre10 to the latest
version of starfire.c. Please apply; the 2.2.19pre10 version doesn't work
if compiled in (because drivers/net builds net.a, not net.o). It also fixes
the MII interface detection problem mentioned by Don Becker.

The patch is longish, but it's mostly whitespace and moving code around.
It also removes all the code that's #ifdef ZEROCOPY, since Jeff Garzik
doesn't want it in 2.4.x and it definitely can't work in 2.2.x.
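
The detection fix itself is small; condensed from the corresponding hunks
below, it amounts to resetting each PHY before trusting its status and
treating an all-ones MII read as "no transceiver":

	/* Reset the PHY and wait for the reset bit to clear before
	   reading the status register (mdio_read() now also returns 0
	   when the read comes back as 0xffff, i.e. nothing there). */
	for (phy = 0; phy < 32 && phy_idx < 4; phy++) {
		mdio_write(dev, phy, 0, 0x8000);
		udelay(500);
		boguscnt = 1000;
		while (--boguscnt > 0)
			if ((mdio_read(dev, phy, 0) & 0x8000) == 0)
				break;
		if (boguscnt == 0) {
			printk("%s: PHY reset never completed!\n", dev->name);
			continue;
		}
		mii_status = mdio_read(dev, phy, 1);
		if (mii_status != 0x0000) {
			np->phys[phy_idx++] = phy;
			np->advertising = mdio_read(dev, phy, 4);
			/* there can be only one PHY on-board */
			break;
		}
	}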

Thanks,
Ion

--
It is better to keep your mouth shut and be thought a fool,
than to open it and remove all doubt.
-----------------------
--- /usr/src/local/linux-2.2.19pre10-vanilla/drivers/net/starfire.c Mon Feb 12 11:42:32 2001
+++ linux-2.2.18/drivers/net/starfire.c Sun Feb 11 16:52:50 2001
@@ -20,7 +20,7 @@
-----------------------------------------------------------

Linux kernel-specific changes:
-
+
LK1.1.1 (jgarzik):
- Use PCI driver interface
- Fix MOD_xxx races
@@ -31,7 +31,7 @@

LK1.1.3 (Andrew Morton)
- Timer cleanups
-
+
LK1.1.4 (jgarzik):
- Merge Becker version 1.03

@@ -41,6 +41,17 @@

LK1.2.2 (Ion Badulescu)
- Backported to 2.2.x
+
+ LK1.2.3 (Ion Badulescu)
+ - Fix the flaky mdio interface
+ - More compat clean-ups
+
+ LK1.2.4 (Ion Badulescu)
+ - More 2.2.x initialization fixes
+
+TODO:
+ - implement tx_timeout() properly
+ - support ethtool
*/

/* These identify the driver base version and may not be removed. */
@@ -50,7 +61,7 @@
" Updates and info at http://www.scyld.com/network/starfire.html\n";

static const char version3[] =
-" (unofficial 2.2.x kernel port, version 1.2.2, February 07, 2001)\n";
+" (unofficial 2.2.x kernel port, version 1.2.4, February 11, 2001)\n";

/* The user-configurable values.
These may be modified when a driver module is loaded.*/
@@ -66,8 +77,8 @@
* for this driver to really use the firmware. Note that Rx/Tx
* hardware TCP checksumming is not possible without the firmware.
*
- * I'm currently talking to Adaptec about this redistribution issue.
- * Stay tuned...
+ * I'm currently [Feb 2001] talking to Adaptec about this redistribution
+ * issue. Stay tuned...
*/
#undef HAS_FIRMWARE
/*
@@ -75,10 +86,6 @@
* of length 1. If and when this is fixed, the #define below can be removed.
*/
#define HAS_BROKEN_FIRMWARE
-/*
- * Define this if using the driver with the zero-copy patch
- */
-#undef ZEROCOPY

/* Used for tuning interrupt latency vs. overhead. */
static int interrupt_mitigation = 0x0;
@@ -87,12 +94,27 @@
static int max_interrupt_work = 20;
static int mtu = 0;
/* Maximum number of multicast addresses to filter (vs. rx-all-multicast).
- The Starfire has a 512 element hash table based on the Ethernet CRC. */
+ The Starfire has a 512 element hash table based on the Ethernet CRC. */
static int multicast_filter_limit = 32;

-/* Set the copy breakpoint for the copy-only-tiny-frames scheme.
- Setting to > 1518 effectively disables this feature. */
+#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
+/*
+ * Set the copy breakpoint for the copy-only-tiny-frames scheme.
+ * Setting to > 1518 effectively disables this feature.
+ *
+ * NOTE:
+ * The ia64 doesn't allow for unaligned loads even of integers being
+ * misaligned on a 2 byte boundary. Thus always force copying of
+ * packets as the starfire doesn't allow for misaligned DMAs ;-(
+ * 23/10/2000 - Jes
+ *
+ * Neither does the Alpha. -Ion
+ */
+#if defined(__ia64__) || defined(__alpha__)
+static int rx_copybreak = PKT_BUF_SZ;
+#else
static int rx_copybreak = 0;
+#endif

/* Used to pass the media type, etc.
Both 'options[]' and 'full_duplex[]' exist for driver interoperability.
@@ -116,29 +138,9 @@

/* Operational parameters that usually are not changed. */
/* Time in jiffies before concluding the transmitter is hung. */
-#define TX_TIMEOUT (2*HZ)
-
-#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
-
-/*
- * The ia64 doesn't allow for unaligned loads even of integers being
- * misaligned on a 2 byte boundary. Thus always force copying of
- * packets as the starfire doesn't allow for misaligned DMAs ;-(
- * 23/10/2000 - Jes
- *
- * Neither does the Alpha. -Ion
- */
-#if defined(__ia64__) || defined(__alpha__)
-#define PKT_SHOULD_COPY(pkt_len) 1
-#else
-#define PKT_SHOULD_COPY(pkt_len) (pkt_len < rx_copybreak)
-#endif
+#define TX_TIMEOUT (2*HZ)

-#ifdef ZEROCOPY
-#define skb_first_frag_len(skb) skb_headlen(skb)
-#else /* not ZEROCOPY */
#define skb_first_frag_len(skb) (skb->len)
-#endif /* not ZEROCOPY */

#if !defined(__OPTIMIZE__)
#warning You must compile this file with the correct options!
@@ -146,6 +148,7 @@
#error You must compile this driver with "-O".
#endif

+#include <linux/version.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/string.h>
@@ -159,58 +162,159 @@
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/init.h>
+#include <linux/delay.h>
#include <asm/processor.h> /* Processor type for cache alignment. */
#include <asm/bitops.h>
#include <asm/io.h>

-#include <linux/version.h>
+#ifdef HAS_FIRMWARE
+#include "starfire_firmware.h"
+#endif /* HAS_FIRMWARE */
+
+MODULE_AUTHOR("Donald Becker <[email protected]>");
+MODULE_DESCRIPTION("Adaptec Starfire Ethernet driver");
+MODULE_PARM(max_interrupt_work, "i");
+MODULE_PARM(mtu, "i");
+MODULE_PARM(debug, "i");
+MODULE_PARM(rx_copybreak, "i");
+MODULE_PARM(interrupt_mitigation, "i");
+MODULE_PARM(options, "1-" __MODULE_STRING(MAX_UNITS) "i");
+MODULE_PARM(full_duplex, "1-" __MODULE_STRING(MAX_UNITS) "i");
+
+/*
+ Theory of Operation
+
+I. Board Compatibility
+
+This driver is for the Adaptec 6915 "Starfire" 64 bit PCI Ethernet adapter.
+
+II. Board-specific settings
+
+III. Driver operation
+
+IIIa. Ring buffers
+
+The Starfire hardware uses multiple fixed-size descriptor queues/rings. The
+ring sizes are set fixed by the hardware, but may optionally be wrapped
+earlier by the END bit in the descriptor.
+This driver uses that hardware queue size for the Rx ring, where a large
+number of entries has no ill effect beyond increases the potential backlog.
+The Tx ring is wrapped with the END bit, since a large hardware Tx queue
+disables the queue layer priority ordering and we have no mechanism to
+utilize the hardware two-level priority queue. When modifying the
+RX/TX_RING_SIZE pay close attention to page sizes and the ring-empty warning
+levels.
+
+IIIb/c. Transmit/Receive Structure
+
+See the Adaptec manual for the many possible structures, and options for
+each structure. There are far too many to document here.
+
+For transmit this driver uses type 0/1 transmit descriptors (depending
+on the presence of the zerocopy patches), and relies on automatic
+minimum-length padding. It does not use the completion queue
+consumer index, but instead checks for non-zero status entries.
+
+For receive this driver uses type 0 receive descriptors. The driver
+allocates full frame size skbuffs for the Rx ring buffers, so all frames
+should fit in a single descriptor. The driver does not use the completion
+queue consumer index, but instead checks for non-zero status entries.
+
+When an incoming frame is less than RX_COPYBREAK bytes long, a fresh skbuff
+is allocated and the frame is copied to the new skbuff. When the incoming
+frame is larger, the skbuff is passed directly up the protocol stack.
+Buffers consumed this way are replaced by newly allocated skbuffs in a later
+phase of receive.
+
+A notable aspect of operation is that unaligned buffers are not permitted by
+the Starfire hardware. The IP header at offset 14 in an ethernet frame thus
+isn't longword aligned, which may cause problems on some machine
+e.g. Alphas and IA64. For these architectures, the driver is forced to copy
+the frame into a new skbuff unconditionally. Copied frames are put into the
+skbuff at an offset of "+2", thus 16-byte aligning the IP header.
+
+IIId. Synchronization
+
+The driver runs as two independent, single-threaded flows of control. One
+is the send-packet routine, which enforces single-threaded use by the
+dev->tbusy flag. The other thread is the interrupt handler, which is single
+threaded by the hardware and interrupt handling software.
+
+The send packet thread has partial control over the Tx ring and 'dev->tbusy'
+flag. It sets the tbusy flag whenever it's queuing a Tx packet. If the next
+queue slot is empty, it clears the tbusy flag when finished otherwise it sets
+the 'lp->tx_full' flag.
+
+The interrupt handler has exclusive control over the Rx ring and records stats
+from the Tx ring. After reaping the stats, it marks the Tx queue entry as
+empty by incrementing the dirty_tx mark. Iff the 'lp->tx_full' flag is set, it
+clears both the tx_full and tbusy flags.
+
+IV. Notes
+
+IVb. References
+
+The Adaptec Starfire manuals, available only from Adaptec.
+http://www.scyld.com/expert/100mbps.html
+http://www.scyld.com/expert/NWay.html
+
+IVc. Errata
+
+*/
+
+
+
+/* 2.2.x compatibility code */
#if LINUX_VERSION_CODE < 0x20300
#include <linux/kcomp.h>

static LIST_HEAD(pci_drivers);

struct pci_driver_mapping {
- struct pci_dev *dev;
- struct pci_driver *drv;
- void *driver_data;
+ struct pci_dev *dev;
+ struct pci_driver *drv;
+ void *driver_data;
};

struct pci_device_id {
- unsigned int vendor, device;
- unsigned int subvendor, subdevice;
- unsigned int class, class_mask;
- unsigned long driver_data;
+ unsigned int vendor, device;
+ unsigned int subvendor, subdevice;
+ unsigned int class, class_mask;
+ unsigned long driver_data;
};

struct pci_driver {
- struct list_head node;
- struct pci_dev *dev;
- char *name;
- const struct pci_device_id *id_table; /* NULL if wants all devices */
- int (*probe)(struct pci_dev *dev, const struct pci_device_id *id); /* New device inserted */
- void (*remove)(struct pci_dev *dev); /* Device removed (NULL if not a hot-plug capable driver) */
- void (*suspend)(struct pci_dev *dev); /* Device suspended */
- void (*resume)(struct pci_dev *dev); /* Device woken up */
+ struct list_head node;
+ struct pci_dev *dev;
+ char *name;
+ const struct pci_device_id *id_table; /* NULL if wants all devices */
+ int (*probe)(struct pci_dev *dev, const struct pci_device_id *id); /* New device inserted */
+ void (*remove)(struct pci_dev *dev); /* Device removed (NULL if not a hot-plug capable driver) */
+ void (*suspend)(struct pci_dev *dev); /* Device suspended */
+ void (*resume)(struct pci_dev *dev); /* Device woken up */
};

#define PCI_MAX_MAPPINGS 16
static struct pci_driver_mapping drvmap [PCI_MAX_MAPPINGS] = { { NULL, } , };

-#define __devinit
-#define __devinitdata
+#define __devinit __init
+#define __devinitdata __initdata
#define __devexit
#define MODULE_DEVICE_TABLE(foo,bar)
+#define SET_MODULE_OWNER(dev)
+#define COMPAT_MOD_INC_USE_COUNT MOD_INC_USE_COUNT
+#define COMPAT_MOD_DEC_USE_COUNT MOD_DEC_USE_COUNT
#define PCI_ANY_ID (~0)
-#define IORESOURCE_MEM 2
-#define PCI_DMA_FROMDEVICE 0
-#define PCI_DMA_TODEVICE 0
+#define IORESOURCE_MEM 2
+#define PCI_DMA_FROMDEVICE 0
+#define PCI_DMA_TODEVICE 0

#define request_mem_region(addr, size, name) ((void *)1)
#define release_mem_region(addr, size)
#define del_timer_sync(timer) del_timer(timer)

static inline void *pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
- dma_addr_t *dma_handle)
+ dma_addr_t *dma_handle)
{
void *virt_ptr;

@@ -222,7 +326,7 @@
#define pci_map_single(cookie, address, size, dir) virt_to_bus(address)
#define pci_unmap_single(cookie, address, size, dir)
#define pci_dma_sync_single(cookie, address, size, dir)
-#undef pci_resource_flags
+#undef pci_resource_flags
#define pci_resource_flags(dev, i) \
((dev->base_address[i] & IORESOURCE_IO) ? IORESOURCE_IO : IORESOURCE_MEM)

@@ -282,19 +386,22 @@
id = NULL;

found = 0;
- for (i = 0; i < PCI_MAX_MAPPINGS && !found; i++)
+ for (i = 0; i < PCI_MAX_MAPPINGS; i++)
if (!drvmap[i].dev) {
drvmap[i].dev = dev;
drvmap[i].drv = drv;
found = 1;
+ break;
}

- if (drv->probe(dev, id) >= 0) {
- if(found)
- return 1;
- } else {
- drvmap[i - 1].dev = NULL;
- }
+ if (!found)
+ return 0;
+
+ if (drv->probe(dev, id) >= 0)
+ return 1;
+
+ /* clean up */
+ drvmap[i].dev = NULL;
return 0;
}

@@ -326,13 +433,15 @@
list_del(&drv->node);
for (dev = pci_devices; dev; dev = dev->next) {
found = 0;
- for (i = 0; i < PCI_MAX_MAPPINGS && !found; i++)
- if (drvmap[i].dev == dev)
+ for (i = 0; i < PCI_MAX_MAPPINGS; i++)
+ if (drvmap[i].dev == dev) {
found = 1;
+ break;
+ }
if (found) {
if (drv->remove)
drv->remove(dev);
- drvmap[i - 1].dev = NULL;
+ drvmap[i].dev = NULL;
}
}
#endif
@@ -348,116 +457,46 @@

static inline int pci_module_init(struct pci_driver *drv)
{
- int rc = pci_register_driver (drv);
-
- if (rc > 0)
+ if (pci_register_driver(drv))
return 0;
-
- /* if we get here, we need to clean up pci driver instance
- * and return some sort of error */
- pci_unregister_driver (drv);
-
return -ENODEV;
}

-#endif /* LINUX_VERSION_CODE < 0x20300 */
-
-#ifdef HAS_FIRMWARE
-#include "starfire_firmware.h"
-#endif /* HAS_FIRMWARE */
+static struct pci_driver starfire_driver;

-MODULE_AUTHOR("Donald Becker <[email protected]>");
-MODULE_DESCRIPTION("Adaptec Starfire Ethernet driver");
-MODULE_PARM(max_interrupt_work, "i");
-MODULE_PARM(mtu, "i");
-MODULE_PARM(debug, "i");
-MODULE_PARM(rx_copybreak, "i");
-MODULE_PARM(interrupt_mitigation, "i");
-MODULE_PARM(options, "1-" __MODULE_STRING(MAX_UNITS) "i");
-MODULE_PARM(full_duplex, "1-" __MODULE_STRING(MAX_UNITS) "i");
-
-/*
- Theory of Operation
-
-I. Board Compatibility
-
-This driver is for the Adaptec 6915 "Starfire" 64 bit PCI Ethernet adapter.
-
-II. Board-specific settings
-
-III. Driver operation
-
-IIIa. Ring buffers
-
-The Starfire hardware uses multiple fixed-size descriptor queues/rings. The
-ring sizes are set fixed by the hardware, but may optionally be wrapped
-earlier by the END bit in the descriptor.
-This driver uses that hardware queue size for the Rx ring, where a large
-number of entries has no ill effect beyond increases the potential backlog.
-The Tx ring is wrapped with the END bit, since a large hardware Tx queue
-disables the queue layer priority ordering and we have no mechanism to
-utilize the hardware two-level priority queue. When modifying the
-RX/TX_RING_SIZE pay close attention to page sizes and the ring-empty warning
-levels.
-
-IIIb/c. Transmit/Receive Structure
-
-See the Adaptec manual for the many possible structures, and options for
-each structure. There are far too many to document here.
-
-For transmit this driver uses type 0/1 transmit descriptors (depending
-on the presence of the zerocopy patches), and relies on automatic
-minimum-length padding. It does not use the completion queue
-consumer index, but instead checks for non-zero status entries.
-
-For receive this driver uses type 0 receive descriptors. The driver
-allocates full frame size skbuffs for the Rx ring buffers, so all frames
-should fit in a single descriptor. The driver does not use the completion
-queue consumer index, but instead checks for non-zero status entries.
-
-When an incoming frame is less than RX_COPYBREAK bytes long, a fresh skbuff
-is allocated and the frame is copied to the new skbuff. When the incoming
-frame is larger, the skbuff is passed directly up the protocol stack.
-Buffers consumed this way are replaced by newly allocated skbuffs in a
-later phase of receive.
-
-A notable aspect of operation is that unaligned buffers are not permitted by
-the Starfire hardware. The IP header at offset 14 in an ethernet frame thus
-isn't longword aligned, which may cause problems on some machine
-e.g. Alphas and IA64. For these architectures, the driver is forced to copy
-the frame into a new skbuff unconditionally. Copied frames are put into the
-skbuff at an offset of "+2", thus 16-byte aligning the IP header.
-
-IIId. Synchronization
+int __init starfire_probe(struct net_device *dev)
+{
+ static int __initdata probed = 0;

-The driver runs as two independent, single-threaded flows of control. One
-is the send-packet routine, which enforces single-threaded use by the
-dev->tbusy flag. The other thread is the interrupt handler, which is single
-threaded by the hardware and interrupt handling software.
+ if (probed)
+ return -ENODEV;
+ probed++;

-The send packet thread has partial control over the Tx ring and 'dev->tbusy'
-flag. It sets the tbusy flag whenever it's queuing a Tx packet. If the next
-queue slot is empty, it clears the tbusy flag when finished otherwise it sets
-the 'lp->tx_full' flag.
+ return pci_module_init(&starfire_driver);
+}

-The interrupt handler has exclusive control over the Rx ring and records stats
-from the Tx ring. After reaping the stats, it marks the Tx queue entry as
-empty by incrementing the dirty_tx mark. Iff the 'lp->tx_full' flag is set, it
-clears both the tx_full and tbusy flags.
+#define init_tx_timer(dev, func, timeout)
+#define kick_tx_timer(dev, func, timeout) \
+ if (netif_queue_stopped(dev)) { \
+ /* If this happens network layer tells us we're broken. */ \
+ if (jiffies - dev->trans_start > timeout) \
+ func(dev); \
+ }

-IV. Notes
+#else /* LINUX_VERSION_CODE > 0x20300 */

-IVb. References
+#define COMPAT_MOD_INC_USE_COUNT
+#define COMPAT_MOD_DEC_USE_COUNT

-The Adaptec Starfire manuals, available only from Adaptec.
-http://www.scyld.com/expert/100mbps.html
-http://www.scyld.com/expert/NWay.html
+#define init_tx_timer(dev, func, timeout) \
+ dev->tx_timeout = func; \
+ dev->watchdog_timeo = timeout;
+#define kick_tx_timer(dev, func, timeout)

-IVc. Errata

-*/
+#endif /* LINUX_VERSION_CODE > 0x20300 */
+/* end of compatibility code */

-

enum chip_capability_flags {CanHaveMII=1, };
#define PCI_IOTYPE (PCI_USES_MASTER | PCI_USES_MEM | PCI_ADDR0)
@@ -574,7 +613,7 @@

/* The Rx and Tx buffer descriptors. */
struct starfire_rx_desc {
- u32 rxaddr; /* Optionally 64 bits. */
+ u32 rxaddr; /* Optionally 64 bits. */
};
enum rx_desc_bits {
RxDescValid=1, RxDescEndRing=2,
@@ -587,14 +626,14 @@
#define csum_rx_status
#endif /* HAS_FIRMWARE */
struct rx_done_desc {
- u32 status; /* Low 16 bits is length. */
+ u32 status; /* Low 16 bits is length. */
#ifdef csum_rx_status
- u32 status2; /* Low 16 bits is csum */
+ u32 status2; /* Low 16 bits is csum */
#endif /* csum_rx_status */
#ifdef full_rx_status
u32 status2;
u16 vlanid;
- u16 csum; /* partial checksum */
+ u16 csum; /* partial checksum */
u32 timestamp;
#endif /* full_rx_status */
};
@@ -602,41 +641,24 @@
RxOK=0x20000000, RxFIFOErr=0x10000000, RxBufQ2=0x08000000,
};

-#ifdef ZEROCOPY
-/* Type 0 Tx descriptor. */
-/* If more fragments are needed, don't forget to change the
- descriptor spacing as well! */
-struct starfire_tx_desc {
- u32 status;
- u32 nbufs;
- u32 first_addr;
- u16 first_len;
- u16 total_len;
- struct {
- u32 addr;
- u32 len;
- } frag[6];
-};
-#else /* not ZEROCOPY */
/* Type 1 Tx descriptor. */
struct starfire_tx_desc {
- u32 status; /* Upper bits are status, lower 16 length. */
+ u32 status; /* Upper bits are status, lower 16 length. */
u32 first_addr;
};
-#endif /* not ZEROCOPY */
enum tx_desc_bits {
TxDescID=0xB0000000,
TxCRCEn=0x01000000, TxDescIntr=0x08000000,
TxRingWrap=0x04000000, TxCalTCP=0x02000000,
};
struct tx_done_report {
- u32 status; /* timestamp, index. */
+ u32 status; /* timestamp, index. */
#if 0
- u32 intrstatus; /* interrupt status */
+ u32 intrstatus; /* interrupt status */
#endif
};

-#define PRIV_ALIGN 15 /* Required alignment mask */
+#define PRIV_ALIGN 15 /* Required alignment mask */
struct rx_ring_info {
struct sk_buff *skb;
dma_addr_t mapping;
@@ -644,9 +666,6 @@
struct tx_ring_info {
struct sk_buff *skb;
dma_addr_t first_mapping;
-#ifdef ZEROCOPY
- dma_addr_t frag_mapping[6];
-#endif /* ZEROCOPY */
};

struct netdev_private {
@@ -670,45 +689,45 @@
struct timer_list timer; /* Media monitoring timer. */
struct pci_dev *pci_dev;
/* Frequently used values: keep some adjacent for cache effect. */
- unsigned int cur_rx, dirty_rx; /* Producer/consumer ring indices */
+ unsigned int cur_rx, dirty_rx; /* Producer/consumer ring indices */
unsigned int cur_tx, dirty_tx;
- unsigned int rx_buf_sz; /* Based on MTU+slack. */
- unsigned int tx_full:1; /* The Tx queue is full. */
+ unsigned int rx_buf_sz; /* Based on MTU+slack. */
+ unsigned int tx_full:1; /* The Tx queue is full. */
/* These values are keep track of the transceiver/media in use. */
- unsigned int full_duplex:1, /* Full-duplex operation requested. */
- medialock:1, /* Xcvr set to fixed speed/duplex. */
+ unsigned int full_duplex:1, /* Full-duplex operation requested. */
+ medialock:1, /* Xcvr set to fixed speed/duplex. */
rx_flowctrl:1,
- tx_flowctrl:1; /* Use 802.3x flow control. */
- unsigned int default_port:4; /* Last dev->if_port value. */
+ tx_flowctrl:1; /* Use 802.3x flow control. */
+ unsigned int default_port:4; /* Last dev->if_port value. */
u32 tx_mode;
u8 tx_threshold;
/* MII transceiver section. */
- int mii_cnt; /* MII device addresses. */
- u16 advertising; /* NWay media advertisement */
- unsigned char phys[2]; /* MII device addresses. */
-};
-
-static int mdio_read(struct net_device *dev, int phy_id, int location);
-static void mdio_write(struct net_device *dev, int phy_id, int location, int value);
-static int netdev_open(struct net_device *dev);
-static void check_duplex(struct net_device *dev, int startup);
-static void netdev_timer(unsigned long data);
-static void tx_timeout(struct net_device *dev);
-static void init_ring(struct net_device *dev);
-static int start_tx(struct sk_buff *skb, struct net_device *dev);
-static void intr_handler(int irq, void *dev_instance, struct pt_regs *regs);
-static void netdev_error(struct net_device *dev, int intr_status);
-static int netdev_rx(struct net_device *dev);
-static void netdev_error(struct net_device *dev, int intr_status);
-static void set_rx_mode(struct net_device *dev);
+ int mii_cnt; /* MII device addresses. */
+ u16 advertising; /* NWay media advertisement */
+ unsigned char phys[2]; /* MII device addresses. */
+};
+
+static int mdio_read(struct net_device *dev, int phy_id, int location);
+static void mdio_write(struct net_device *dev, int phy_id, int location, int value);
+static int netdev_open(struct net_device *dev);
+static void check_duplex(struct net_device *dev, int startup);
+static void netdev_timer(unsigned long data);
+static void tx_timeout(struct net_device *dev);
+static void init_ring(struct net_device *dev);
+static int start_tx(struct sk_buff *skb, struct net_device *dev);
+static void intr_handler(int irq, void *dev_instance, struct pt_regs *regs);
+static void netdev_error(struct net_device *dev, int intr_status);
+static int netdev_rx(struct net_device *dev);
+static void netdev_error(struct net_device *dev, int intr_status);
+static void set_rx_mode(struct net_device *dev);
static struct net_device_stats *get_stats(struct net_device *dev);
-static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
-static int netdev_close(struct net_device *dev);
+static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
+static int netdev_close(struct net_device *dev);



-static int __devinit starfire_init_one (struct pci_dev *pdev,
- const struct pci_device_id *ent)
+static int __devinit starfire_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
{
struct netdev_private *np;
int i, irq, option, chip_idx = ent->driver_data;
@@ -717,13 +736,14 @@
static int printed_version = 0;
long ioaddr;
int drv_flags, io_size;
+ int boguscnt;

card_idx++;
option = card_idx < MAX_UNITS ? options[card_idx] : 0;

if (!printed_version++)
printk(KERN_INFO "%s" KERN_INFO "%s" KERN_INFO "%s",
- version1, version2, version3);
+ version1, version2, version3);

if (pci_enable_device (pdev))
return -EIO;
@@ -734,21 +754,22 @@
printk (KERN_ERR "starfire %d: no PCI MEM resources, aborting\n", card_idx);
return -ENODEV;
}
-
+
dev = init_etherdev(NULL, sizeof(*np));
if (!dev) {
printk (KERN_ERR "starfire %d: cannot alloc etherdev, aborting\n", card_idx);
return -ENOMEM;
}
-
- irq = pdev->irq;
+ SET_MODULE_OWNER(dev);
+
+ irq = pdev->irq;

if (request_mem_region (ioaddr, io_size, dev->name) == NULL) {
printk (KERN_ERR "starfire %d: resource 0x%x @ 0x%lx busy, aborting\n",
card_idx, io_size, ioaddr);
goto err_out_free_netdev;
}
-
+
ioaddr = (long) ioremap (ioaddr, io_size);
if (!ioaddr) {
printk (KERN_ERR "starfire %d: cannot remap 0x%x @ 0x%lx, aborting\n",
@@ -757,34 +778,42 @@
}

pci_set_master (pdev);
-
+
printk(KERN_INFO "%s: %s at 0x%lx, ",
dev->name, netdrv_tbl[chip_idx].name, ioaddr);

-#ifdef ZEROCOPY
- /* Starfire can do SG and TCP/UDP checksumming */
- dev->features |= NETIF_F_SG;
-#ifdef HAS_FIRMWARE
- dev->features |= NETIF_F_IP_CSUM;
-#endif /* HAS_FIRMWARE */
-#endif /* ZEROCOPY */
-
/* Serial EEPROM reads are hidden by the hardware. */
for (i = 0; i < 6; i++)
dev->dev_addr[i] = readb(ioaddr + EEPROMCtrl + 20-i);
for (i = 0; i < 5; i++)
- printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x:", dev->dev_addr[i]);
printk("%2.2x, IRQ %d.\n", dev->dev_addr[i], irq);

#if ! defined(final_version) /* Dump the EEPROM contents during development. */
if (debug > 4)
for (i = 0; i < 0x20; i++)
- printk("%2.2x%s", (unsigned int)readb(ioaddr + EEPROMCtrl + i),
- i % 16 != 15 ? " " : "\n");
+ printk("%2.2x%s",
+ (unsigned int)readb(ioaddr + EEPROMCtrl + i),
+ i % 16 != 15 ? " " : "\n");
#endif

+ /* Issue soft reset */
+ writel(0x8000, ioaddr + TxMode);
+ udelay(1000);
+ writel(0, ioaddr + TxMode);
+
/* Reset the chip to erase previous misconfiguration. */
writel(1, ioaddr + PCIDeviceConfig);
+ boguscnt = 1000;
+ while (--boguscnt > 0) {
+ udelay(10);
+ if ((readl(ioaddr + PCIDeviceConfig) & 1) == 0)
+ break;
+ }
+ if (boguscnt == 0)
+ printk("%s: chipset reset never completed!\n", dev->name);
+ /* wait a little longer */
+ udelay(1000);

dev->base_addr = ioaddr;
dev->irq = irq;
@@ -806,7 +835,7 @@
if (np->default_port)
np->medialock = 1;
}
- if (card_idx < MAX_UNITS && full_duplex[card_idx] > 0)
+ if (card_idx < MAX_UNITS && full_duplex[card_idx] > 0)
np->full_duplex = 1;

if (np->full_duplex)
@@ -815,6 +844,7 @@
/* The chip-specific entries in the device structure. */
dev->open = &netdev_open;
dev->hard_start_xmit = &start_tx;
+ init_tx_timer(dev, tx_timeout, TX_TIMEOUT);
dev->stop = &netdev_close;
dev->get_stats = &get_stats;
dev->set_multicast_list = &set_rx_mode;
@@ -825,14 +855,27 @@

if (drv_flags & CanHaveMII) {
int phy, phy_idx = 0;
+ int mii_status;
for (phy = 0; phy < 32 && phy_idx < 4; phy++) {
- int mii_status = mdio_read(dev, phy, 1);
- if (mii_status != 0xffff && mii_status != 0x0000) {
+ mdio_write(dev, phy, 0, 0x8000);
+ udelay(500);
+ boguscnt = 1000;
+ while (--boguscnt > 0)
+ if ((mdio_read(dev, phy, 0) & 0x8000) == 0)
+ break;
+ if (boguscnt == 0) {
+ printk("%s: PHY reset never completed!\n", dev->name);
+ continue;
+ }
+ mii_status = mdio_read(dev, phy, 1);
+ if (mii_status != 0x0000) {
np->phys[phy_idx++] = phy;
np->advertising = mdio_read(dev, phy, 4);
printk(KERN_INFO "%s: MII PHY found at address %d, status "
"0x%4.4x advertising %4.4x.\n",
dev->name, phy, mii_status, np->advertising);
+ /* there can be only one PHY on-board */
+ break;
}
}
np->mii_cnt = phy_idx;
@@ -858,7 +901,11 @@
/* ??? Should we add a busy-wait here? */
do
result = readl(mdio_addr);
- while ((result & 0xC0000000) != 0x80000000 && --boguscnt >= 0);
+ while ((result & 0xC0000000) != 0x80000000 && --boguscnt > 0);
+ if (boguscnt == 0)
+ return 0;
+ if ((result & 0xffff) == 0xffff)
+ return 0;
return result & 0xffff;
}

@@ -879,11 +926,11 @@

/* Do we ever need to reset the chip??? */

- MOD_INC_USE_COUNT;
+ COMPAT_MOD_INC_USE_COUNT;

retval = request_irq(dev->irq, &intr_handler, SA_SHIRQ, dev->name, dev);
if (retval) {
- MOD_DEC_USE_COUNT;
+ COMPAT_MOD_DEC_USE_COUNT;
return retval;
}

@@ -892,7 +939,7 @@
writel(1, ioaddr + PCIDeviceConfig);
if (debug > 1)
printk(KERN_DEBUG "%s: netdev_open() irq %d.\n",
- dev->name, dev->irq);
+ dev->name, dev->irq);
/* Allocate the various queues, failing gracefully. */
if (np->tx_done_q == 0)
np->tx_done_q = pci_alloc_consistent(np->pci_dev, PAGE_SIZE, &np->tx_done_q_dma);
@@ -902,47 +949,38 @@
np->tx_ring = pci_alloc_consistent(np->pci_dev, PAGE_SIZE, &np->tx_ring_dma);
if (np->rx_ring == 0)
np->rx_ring = pci_alloc_consistent(np->pci_dev, PAGE_SIZE, &np->rx_ring_dma);
- if (np->tx_done_q == 0 || np->rx_done_q == 0
- || np->rx_ring == 0 || np->tx_ring == 0) {
+ if (np->tx_done_q == 0 || np->rx_done_q == 0
+ || np->rx_ring == 0 || np->tx_ring == 0) {
if (np->tx_done_q)
pci_free_consistent(np->pci_dev, PAGE_SIZE,
- np->tx_done_q, np->tx_done_q_dma);
+ np->tx_done_q, np->tx_done_q_dma);
if (np->rx_done_q)
pci_free_consistent(np->pci_dev, sizeof(struct rx_done_desc) * DONE_Q_SIZE,
- np->rx_done_q, np->rx_done_q_dma);
+ np->rx_done_q, np->rx_done_q_dma);
if (np->tx_ring)
pci_free_consistent(np->pci_dev, PAGE_SIZE,
- np->tx_ring, np->tx_ring_dma);
+ np->tx_ring, np->tx_ring_dma);
if (np->rx_ring)
pci_free_consistent(np->pci_dev, PAGE_SIZE,
- np->rx_ring, np->rx_ring_dma);
- MOD_DEC_USE_COUNT;
+ np->rx_ring, np->rx_ring_dma);
+ COMPAT_MOD_DEC_USE_COUNT;
return -ENOMEM;
}

init_ring(dev);
/* Set the size of the Rx buffers. */
writel((np->rx_buf_sz << RxBufferLenShift) |
- (0 << RxMinDescrThreshShift) |
- RxPrefetchMode | RxVariableQ |
- RxDescSpace4,
- ioaddr + RxDescQCtrl);
+ (0 << RxMinDescrThreshShift) |
+ RxPrefetchMode | RxVariableQ |
+ RxDescSpace4,
+ ioaddr + RxDescQCtrl);

-#ifdef ZEROCOPY
- /* Set Tx descriptor to type 0 and spacing to 64 bytes. */
- writel((2 << TxHiPriFIFOThreshShift) |
- (0 << TxPadLenShift) |
- (4 << TxDMABurstSizeShift) |
- TxDescSpace64 | TxDescType0,
- ioaddr + TxDescCtrl);
-#else /* not ZEROCOPY */
/* Set Tx descriptor to type 1 and padding to 0 bytes. */
writel((2 << TxHiPriFIFOThreshShift) |
(0 << TxPadLenShift) |
(4 << TxDMABurstSizeShift) |
TxDescSpaceUnlim | TxDescType1,
ioaddr + TxDescCtrl);
-#endif /* not ZEROCOPY */

#if defined(ADDR_64BITS) && defined(__alpha__)
/* XXX We really need a 64-bit PCI dma interfaces too... -DaveM */
@@ -959,25 +997,25 @@
writel(np->tx_done_q_dma, ioaddr + TxCompletionAddr);
#ifdef full_rx_status
writel(np->rx_done_q_dma |
- RxComplType3 |
- (0 << RxComplThreshShift),
- ioaddr + RxCompletionAddr);
+ RxComplType3 |
+ (0 << RxComplThreshShift),
+ ioaddr + RxCompletionAddr);
#else /* not full_rx_status */
#ifdef csum_rx_status
writel(np->rx_done_q_dma |
- RxComplType2 |
- (0 << RxComplThreshShift),
- ioaddr + RxCompletionAddr);
+ RxComplType2 |
+ (0 << RxComplThreshShift),
+ ioaddr + RxCompletionAddr);
#else /* not csum_rx_status */
writel(np->rx_done_q_dma |
- RxComplType0 |
- (0 << RxComplThreshShift),
- ioaddr + RxCompletionAddr);
+ RxComplType0 |
+ (0 << RxComplThreshShift),
+ ioaddr + RxCompletionAddr);
#endif /* not csum_rx_status */
#endif /* not full_rx_status */

if (debug > 1)
- printk(KERN_DEBUG "%s: Filling in the station address.\n", dev->name);
+ printk(KERN_DEBUG "%s: Filling in the station address.\n", dev->name);

/* Fill both the unused Tx SA register and the Rx perfect filter. */
for (i = 0; i < 6; i++)
@@ -1003,7 +1041,7 @@
netif_start_queue(dev);

if (debug > 1)
- printk(KERN_DEBUG "%s: Setting the Rx and Tx modes.\n", dev->name);
+ printk(KERN_DEBUG "%s: Setting the Rx and Tx modes.\n", dev->name);
set_rx_mode(dev);

np->advertising = mdio_read(dev, np->phys[0], 4);
@@ -1016,7 +1054,7 @@
IntrRxGFPDead | IntrNoTxCsum | IntrTxBadID,
ioaddr + IntrEnable);
writel(0x00800000 | readl(ioaddr + PCIDeviceConfig),
- ioaddr + PCIDeviceConfig);
+ ioaddr + PCIDeviceConfig);

#ifdef HAS_FIRMWARE
/* Load Rx/Tx firmware into the frame processors */
@@ -1033,7 +1071,7 @@

if (debug > 2)
printk(KERN_DEBUG "%s: Done netdev_open().\n",
- dev->name);
+ dev->name);

/* Set the timer to check for link beat. */
init_timer(&np->timer);
@@ -1066,8 +1104,8 @@
np->full_duplex = duplex;
if (debug > 1)
printk(KERN_INFO "%s: Setting %s-duplex based on MII #%d"
- " negotiated capability %4.4x.\n", dev->name,
- duplex ? "full" : "half", np->phys[0], negotiated);
+ " negotiated capability %4.4x.\n", dev->name,
+ duplex ? "full" : "half", np->phys[0], negotiated);
}
}
if (new_tx_mode != np->tx_mode) {
@@ -1086,7 +1124,7 @@

if (debug > 3) {
printk(KERN_DEBUG "%s: Media selection timer tick, status %8.8x.\n",
- dev->name, (int)readl(ioaddr + IntrStatus));
+ dev->name, (int)readl(ioaddr + IntrStatus));
}
check_duplex(dev, 0);
#if ! defined(final_version)
@@ -1096,7 +1134,7 @@
/* Bogus hardware IRQ: Fake an interrupt handler call. */
if (new_status & 1) {
printk(KERN_ERR "%s: Interrupt blocked, status %8.8x/%8.8x.\n",
- dev->name, new_status, (int)readl(ioaddr + IntrStatus));
+ dev->name, new_status, (int)readl(ioaddr + IntrStatus));
intr_handler(dev->irq, dev, 0);
}
}
@@ -1112,7 +1150,7 @@
long ioaddr = dev->base_addr;

printk(KERN_WARNING "%s: Transmit timed out, status %8.8x,"
- " resetting...\n", dev->name, (int)readl(ioaddr + IntrStatus));
+ " resetting...\n", dev->name, (int)readl(ioaddr + IntrStatus));

#ifndef __alpha__
{
@@ -1183,13 +1221,6 @@
for (i = 0; i < TX_RING_SIZE; i++) {
np->tx_info[i].skb = NULL;
np->tx_info[i].first_mapping = 0;
-#ifdef ZEROCOPY
- {
- int j;
- for (j = 0; j < 6; j++)
- np->tx_info[i].frag_mapping[j] = 0;
- }
-#endif /* ZEROCOPY */
np->tx_ring[i].status = 0;
}
return;
@@ -1199,15 +1230,8 @@
{
struct netdev_private *np = dev->priv;
unsigned int entry;
-#ifdef ZEROCOPY
- int i;
-#endif

- if (netif_queue_stopped(dev)) {
- /* If this happens network layer tells us we're broken. */
- if (jiffies - dev->trans_start > TX_TIMEOUT)
- tx_timeout(dev);
- }
+ kick_tx_timer(dev, tx_timeout, TX_TIMEOUT);

/* Caution: the write order is important here, set the field
with the "ownership" bits last. */
@@ -1215,80 +1239,22 @@
/* Calculate the next Tx descriptor entry. */
entry = np->cur_tx % TX_RING_SIZE;

-#if defined(ZEROCOPY) && defined(HAS_FIRMWARE) && defined(HAS_BROKEN_FIRMWARE)
- {
- int has_bad_length = 0;
-
- if (skb_first_frag_len(skb) == 1)
- has_bad_length = 1;
- else {
- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
- if (skb_shinfo(skb)->frags[i].size == 1) {
- has_bad_length = 1;
- break;
- }
- }
-
- if (has_bad_length)
- skb_checksum_help(skb);
- }
-#endif /* ZEROCOPY && HAS_FIRMWARE && HAS_BROKEN_FIRMWARE */
-
np->tx_info[entry].skb = skb;
np->tx_info[entry].first_mapping =
pci_map_single(np->pci_dev, skb->data, skb_first_frag_len(skb), PCI_DMA_TODEVICE);

np->tx_ring[entry].first_addr = cpu_to_le32(np->tx_info[entry].first_mapping);
-#ifdef ZEROCOPY
- np->tx_ring[entry].first_len = cpu_to_le32(skb_first_frag_len(skb));
- np->tx_ring[entry].total_len = cpu_to_le32(skb->len);
- /* Add "| TxDescIntr" to generate Tx-done interrupts. */
- np->tx_ring[entry].status = cpu_to_le32(TxDescID | TxCRCEn);
- np->tx_ring[entry].nbufs = cpu_to_le32(skb_shinfo(skb)->nr_frags + 1);
-#else /* not ZEROCOPY */
- /* Add "| TxDescIntr" to generate Tx-done interrupts. */
+ /* Add "| TxDescIntr" to generate Tx-done interrupts. */
np->tx_ring[entry].status = cpu_to_le32(skb->len | TxDescID | TxCRCEn | 1 << 16);
-#endif /* not ZEROCOPY */

if (entry >= TX_RING_SIZE-1) /* Wrap ring */
np->tx_ring[entry].status |= cpu_to_le32(TxRingWrap | TxDescIntr);

- /* not ifdef'ed, but shouldn't happen without ZEROCOPY */
- if (skb->ip_summed == CHECKSUM_HW)
- np->tx_ring[entry].status |= cpu_to_le32(TxCalTCP);
-
if (debug > 5) {
-#ifdef ZEROCOPY
- printk(KERN_DEBUG "%s: Tx #%d slot %d status %8.8x nbufs %d len %4.4x/%4.4x.\n",
- dev->name, np->cur_tx, entry,
- le32_to_cpu(np->tx_ring[entry].status),
- le32_to_cpu(np->tx_ring[entry].nbufs),
- le32_to_cpu(np->tx_ring[entry].first_len),
- le32_to_cpu(np->tx_ring[entry].total_len));
-#else /* not ZEROCOPY */
printk(KERN_DEBUG "%s: Tx #%d slot %d status %8.8x.\n",
- dev->name, np->cur_tx, entry,
- le32_to_cpu(np->tx_ring[entry].status));
-#endif /* not ZEROCOPY */
- }
-
-#ifdef ZEROCOPY
- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- skb_frag_t *this_frag = &skb_shinfo(skb)->frags[i];
-
- /* we already have the proper value in entry */
- np->tx_info[entry].frag_mapping[i] =
- pci_map_single(np->pci_dev, page_address(this_frag->page) + this_frag->page_offset, this_frag->size, PCI_DMA_TODEVICE);
-
- np->tx_ring[entry].frag[i].addr = cpu_to_le32(np->tx_info[entry].frag_mapping[i]);
- np->tx_ring[entry].frag[i].len = cpu_to_le32(this_frag->size);
- if (debug > 5) {
- printk(KERN_DEBUG "%s: Tx #%d frag %d len %4.4x.\n",
- dev->name, np->cur_tx, i,
- le32_to_cpu(np->tx_ring[entry].frag[i].len));
- }
+ dev->name, np->cur_tx, entry,
+ le32_to_cpu(np->tx_ring[entry].status));
}
-#endif /* ZEROCOPY */

np->cur_tx++;

@@ -1322,11 +1288,12 @@
struct netdev_private *np;
long ioaddr;
int boguscnt = max_interrupt_work;
+ int consumer;
+ int tx_status;

#ifndef final_version /* Can never occur. */
if (dev == NULL) {
- printk (KERN_ERR "Netdev interrupt handler(): IRQ %d for unknown "
- "device.\n", irq);
+ printk (KERN_ERR "Netdev interrupt handler(): IRQ %d for unknown device.\n", irq);
return;
}
#endif
@@ -1339,7 +1306,7 @@

if (debug > 4)
printk(KERN_DEBUG "%s: Interrupt status %4.4x.\n",
- dev->name, intr_status);
+ dev->name, intr_status);

if (intr_status == 0)
break;
@@ -1350,63 +1317,48 @@
/* Scavenge the skbuff list based on the Tx-done queue.
There are redundant checks here that may be cleaned up
after the driver has proven to be reliable. */
- {
- int consumer = readl(ioaddr + TxConsumerIdx);
- int tx_status;
- if (debug > 4)
- printk(KERN_DEBUG "%s: Tx Consumer index is %d.\n",
- dev->name, consumer);
+ consumer = readl(ioaddr + TxConsumerIdx);
+ if (debug > 4)
+ printk(KERN_DEBUG "%s: Tx Consumer index is %d.\n",
+ dev->name, consumer);
#if 0
- if (np->tx_done >= 250 || np->tx_done == 0)
- printk(KERN_DEBUG "%s: Tx completion entry %d is %8.8x, "
- "%d is %8.8x.\n", dev->name,
- np->tx_done, le32_to_cpu(np->tx_done_q[np->tx_done].status),
- (np->tx_done+1) & (DONE_Q_SIZE-1),
- le32_to_cpu(np->tx_done_q[(np->tx_done+1)&(DONE_Q_SIZE-1)].status));
+ if (np->tx_done >= 250 || np->tx_done == 0)
+ printk(KERN_DEBUG "%s: Tx completion entry %d is %8.8x, %d is %8.8x.\n",
+ dev->name, np->tx_done,
+ le32_to_cpu(np->tx_done_q[np->tx_done].status),
+ (np->tx_done+1) & (DONE_Q_SIZE-1),
+ le32_to_cpu(np->tx_done_q[(np->tx_done+1)&(DONE_Q_SIZE-1)].status));
#endif
- while ((tx_status = le32_to_cpu(np->tx_done_q[np->tx_done].status))
- != 0) {
- if (debug > 4)
- printk(KERN_DEBUG "%s: Tx completion entry %d is %8.8x.\n",
- dev->name, np->tx_done, tx_status);
- if ((tx_status & 0xe0000000) == 0xa0000000) {
- np->stats.tx_packets++;
- } else if ((tx_status & 0xe0000000) == 0x80000000) {
- struct sk_buff *skb;
-#ifdef ZEROCOPY
- int i;
-#endif /* ZEROCOPY */
- u16 entry = tx_status; /* Implicit truncate */
- entry /= sizeof(struct starfire_tx_desc);
-
- skb = np->tx_info[entry].skb;
- np->tx_info[entry].skb = NULL;
- pci_unmap_single(np->pci_dev,
- np->tx_info[entry].first_mapping,
- skb_first_frag_len(skb),
- PCI_DMA_TODEVICE);
- np->tx_info[entry].first_mapping = 0;
-
-#ifdef ZEROCOPY
- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- pci_unmap_single(np->pci_dev,
- np->tx_info[entry].frag_mapping[i],
- skb_shinfo(skb)->frags[i].size,
- PCI_DMA_TODEVICE);
- np->tx_info[entry].frag_mapping[i] = 0;
- }
-#endif /* ZEROCOPY */

- /* Scavenge the descriptor. */
- dev_kfree_skb_irq(skb);
+ while ((tx_status = le32_to_cpu(np->tx_done_q[np->tx_done].status)) != 0) {
+ if (debug > 4)
+ printk(KERN_DEBUG "%s: Tx completion entry %d is %8.8x.\n",
+ dev->name, np->tx_done, tx_status);
+ if ((tx_status & 0xe0000000) == 0xa0000000) {
+ np->stats.tx_packets++;
+ } else if ((tx_status & 0xe0000000) == 0x80000000) {
+ struct sk_buff *skb;
+ u16 entry = tx_status; /* Implicit truncate */
+ entry /= sizeof(struct starfire_tx_desc);
+
+ skb = np->tx_info[entry].skb;
+ np->tx_info[entry].skb = NULL;
+ pci_unmap_single(np->pci_dev,
+ np->tx_info[entry].first_mapping,
+ skb_first_frag_len(skb),
+ PCI_DMA_TODEVICE);
+ np->tx_info[entry].first_mapping = 0;

- np->dirty_tx++;
- }
- np->tx_done_q[np->tx_done].status = 0;
- np->tx_done = (np->tx_done+1) & (DONE_Q_SIZE-1);
+ /* Scavenge the descriptor. */
+ dev_kfree_skb_irq(skb);
+
+ np->dirty_tx++;
}
- writew(np->tx_done, ioaddr + CompletionQConsumerIdx + 2);
+ np->tx_done_q[np->tx_done].status = 0;
+ np->tx_done = (np->tx_done+1) & (DONE_Q_SIZE-1);
}
+ writew(np->tx_done, ioaddr + CompletionQConsumerIdx + 2);
+
if (np->tx_full && np->cur_tx - np->dirty_tx < TX_RING_SIZE - 4) {
/* The ring is no longer full, wake the queue. */
np->tx_full = 0;
@@ -1419,23 +1371,23 @@

if (--boguscnt < 0) {
printk(KERN_WARNING "%s: Too much work at interrupt, "
- "status=0x%4.4x.\n",
- dev->name, intr_status);
+ "status=0x%4.4x.\n",
+ dev->name, intr_status);
break;
}
} while (1);

if (debug > 4)
printk(KERN_DEBUG "%s: exiting interrupt, status=%#4.4x.\n",
- dev->name, (int)readl(ioaddr + IntrStatus));
+ dev->name, (int)readl(ioaddr + IntrStatus));

#ifndef final_version
/* Code that should never be run! Remove after testing.. */
{
static int stopit = 10;
- if (!netif_running(dev) && --stopit < 0) {
+ if (!netif_running(dev) && --stopit < 0) {
printk(KERN_ERR "%s: Emergency stop, looping startup interrupt.\n",
- dev->name);
+ dev->name);
free_irq(irq, dev);
}
}
@@ -1452,98 +1404,99 @@

if (np->rx_done_q == 0) {
printk(KERN_ERR "%s: rx_done_q is NULL! rx_done is %d. %p.\n",
- dev->name, np->rx_done, np->tx_done_q);
+ dev->name, np->rx_done, np->tx_done_q);
return 0;
}

/* If EOP is set on the next entry, it's a new packet. Send it up. */
while ((desc_status = le32_to_cpu(np->rx_done_q[np->rx_done].status)) != 0) {
+ struct sk_buff *skb;
+ u16 pkt_len;
+ int entry;
+
if (debug > 4)
- printk(KERN_DEBUG " netdev_rx() status of %d was %8.8x.\n",
- np->rx_done, desc_status);
+ printk(KERN_DEBUG " netdev_rx() status of %d was %8.8x.\n", np->rx_done, desc_status);
if (--boguscnt < 0)
break;
if ( ! (desc_status & RxOK)) {
/* There was a error. */
if (debug > 2)
- printk(KERN_DEBUG " netdev_rx() Rx error was %8.8x.\n",
- desc_status);
+ printk(KERN_DEBUG " netdev_rx() Rx error was %8.8x.\n", desc_status);
np->stats.rx_errors++;
if (desc_status & RxFIFOErr)
np->stats.rx_fifo_errors++;
- } else {
- struct sk_buff *skb;
- u16 pkt_len = desc_status; /* Implicitly Truncate */
- int entry = (desc_status >> 16) & 0x7ff;
+ goto next_rx;
+ }
+
+ pkt_len = desc_status; /* Implicitly Truncate */
+ entry = (desc_status >> 16) & 0x7ff;

#ifndef final_version
- if (debug > 4)
- printk(KERN_DEBUG " netdev_rx() normal Rx pkt length %d"
- ", bogus_cnt %d.\n",
- pkt_len, boguscnt);
+ if (debug > 4)
+ printk(KERN_DEBUG " netdev_rx() normal Rx pkt length %d, bogus_cnt %d.\n", pkt_len, boguscnt);
#endif
- /* Check if the packet is long enough to accept without copying
- to a minimally-sized skbuff. */
- if (PKT_SHOULD_COPY(pkt_len)
- && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
- skb->dev = dev;
- skb_reserve(skb, 2); /* 16 byte align the IP header */
- pci_dma_sync_single(np->pci_dev,
- np->rx_info[entry].mapping,
- pkt_len, PCI_DMA_FROMDEVICE);
+ /* Check if the packet is long enough to accept without copying
+ to a minimally-sized skbuff. */
+ if (pkt_len < rx_copybreak
+ && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
+ skb->dev = dev;
+ skb_reserve(skb, 2); /* 16 byte align the IP header */
+ pci_dma_sync_single(np->pci_dev,
+ np->rx_info[entry].mapping,
+ pkt_len, PCI_DMA_FROMDEVICE);
#if HAS_IP_COPYSUM /* Call copy + cksum if available. */
- eth_copy_and_sum(skb, np->rx_info[entry].skb->tail, pkt_len, 0);
- skb_put(skb, pkt_len);
+ eth_copy_and_sum(skb, np->rx_info[entry].skb->tail, pkt_len, 0);
+ skb_put(skb, pkt_len);
#else
- memcpy(skb_put(skb, pkt_len), np->rx_info[entry].skb->tail,
- pkt_len);
+ memcpy(skb_put(skb, pkt_len), np->rx_info[entry].skb->tail, pkt_len);
#endif
- } else {
- char *temp;
+ } else {
+ char *temp;

- pci_unmap_single(np->pci_dev, np->rx_info[entry].mapping, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
- skb = np->rx_info[entry].skb;
- temp = skb_put(skb, pkt_len);
- np->rx_info[entry].skb = NULL;
- np->rx_info[entry].mapping = 0;
- }
-#ifndef final_version /* Remove after testing. */
- /* You will want this info for the initial debug. */
- if (debug > 5)
- printk(KERN_DEBUG " Rx data %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:"
- "%2.2x %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x %2.2x%2.2x "
- "%d.%d.%d.%d.\n",
- skb->data[0], skb->data[1], skb->data[2], skb->data[3],
- skb->data[4], skb->data[5], skb->data[6], skb->data[7],
- skb->data[8], skb->data[9], skb->data[10],
- skb->data[11], skb->data[12], skb->data[13],
- skb->data[14], skb->data[15], skb->data[16],
- skb->data[17]);
+ pci_unmap_single(np->pci_dev, np->rx_info[entry].mapping, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ skb = np->rx_info[entry].skb;
+ temp = skb_put(skb, pkt_len);
+ np->rx_info[entry].skb = NULL;
+ np->rx_info[entry].mapping = 0;
+ }
+#ifndef final_version /* Remove after testing. */
+ /* You will want this info for the initial debug. */
+ if (debug > 5)
+ printk(KERN_DEBUG " Rx data %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:"
+ "%2.2x %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x %2.2x%2.2x "
+ "%d.%d.%d.%d.\n",
+ skb->data[0], skb->data[1], skb->data[2], skb->data[3],
+ skb->data[4], skb->data[5], skb->data[6], skb->data[7],
+ skb->data[8], skb->data[9], skb->data[10],
+ skb->data[11], skb->data[12], skb->data[13],
+ skb->data[14], skb->data[15], skb->data[16],
+ skb->data[17]);
#endif
- skb->protocol = eth_type_trans(skb, dev);
+ skb->protocol = eth_type_trans(skb, dev);
#if defined(full_rx_status) || defined(csum_rx_status)
- if (le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0x01000000) {
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- }
- /*
- * This feature doesn't seem to be working, at least
- * with the two firmware versions I have. If the GFP sees
- * a fragment, it either ignores it completely, or reports
- * "bad checksum" on it.
- *
- * Maybe I missed something -- corrections are welcome.
- * Until then, the printk stays. :-) -Ion
- */
- else if (le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0x00400000) {
- skb->ip_summed = CHECKSUM_HW;
- skb->csum = le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0xffff;
- printk(KERN_DEBUG "%s: checksum_hw, status2 = %x\n", dev->name, np->rx_done_q[np->rx_done].status2);
- }
-#endif
- netif_rx(skb);
- dev->last_rx = jiffies;
- np->stats.rx_packets++;
+ if (le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0x01000000) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
}
+ /*
+ * This feature doesn't seem to be working, at least
+ * with the two firmware versions I have. If the GFP sees
+ * a fragment, it either ignores it completely, or reports
+ * "bad checksum" on it.
+ *
+ * Maybe I missed something -- corrections are welcome.
+ * Until then, the printk stays. :-) -Ion
+ */
+ else if (le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0x00400000) {
+ skb->ip_summed = CHECKSUM_HW;
+ skb->csum = le32_to_cpu(np->rx_done_q[np->rx_done].status2) & 0xffff;
+ printk(KERN_DEBUG "%s: checksum_hw, status2 = %x\n", dev->name, np->rx_done_q[np->rx_done].status2);
+ }
+#endif
+ netif_rx(skb);
+ dev->last_rx = jiffies;
+ np->stats.rx_packets++;
+
+next_rx:
np->cur_rx++;
np->rx_done_q[np->rx_done].status = 0;
np->rx_done = (np->rx_done + 1) & (DONE_Q_SIZE-1);
@@ -1558,10 +1511,10 @@
skb = dev_alloc_skb(np->rx_buf_sz);
np->rx_info[entry].skb = skb;
if (skb == NULL)
- break; /* Better luck next round. */
+ break; /* Better luck next round. */
np->rx_info[entry].mapping =
pci_map_single(np->pci_dev, skb->tail, np->rx_buf_sz, PCI_DMA_FROMDEVICE);
- skb->dev = dev; /* Mark as being used by this device. */
+ skb->dev = dev; /* Mark as being used by this device. */
np->rx_ring[entry].rxaddr =
cpu_to_le32(np->rx_info[entry].mapping | RxDescValid);
}
@@ -1572,10 +1525,10 @@
}

if (debug > 5
- || memcmp(np->pad0, np->pad0 + 1, sizeof(np->pad0) -1))
+ || memcmp(np->pad0, np->pad0 + 1, sizeof(np->pad0) -1))
printk(KERN_DEBUG " exiting netdev_rx() status of %d was %8.8x %d.\n",
- np->rx_done, desc_status,
- memcmp(np->pad0, np->pad0 + 1, sizeof(np->pad0) -1));
+ np->rx_done, desc_status,
+ memcmp(np->pad0, np->pad0 + 1, sizeof(np->pad0) -1));

/* Restart Rx engine if stopped. */
return 0;
@@ -1587,9 +1540,9 @@

if (intr_status & IntrLinkChange) {
printk(KERN_NOTICE "%s: Link changed: Autonegotiation advertising"
- " %4.4x partner %4.4x.\n", dev->name,
- mdio_read(dev, np->phys[0], 4),
- mdio_read(dev, np->phys[0], 5));
+ " %4.4x, partner %4.4x.\n", dev->name,
+ mdio_read(dev, np->phys[0], 4),
+ mdio_read(dev, np->phys[0], 5));
check_duplex(dev, 0);
}
if (intr_status & IntrStatsMax) {
@@ -1598,10 +1551,9 @@
/* Came close to underrunning the Tx FIFO, increase threshold. */
if (intr_status & IntrTxDataLow)
writel(++np->tx_threshold, dev->base_addr + TxThreshold);
- if ((intr_status &
- ~(IntrAbnormalSummary|IntrLinkChange|IntrStatsMax|IntrTxDataLow|1)) && debug)
+ if ((intr_status & ~(IntrAbnormalSummary|IntrLinkChange|IntrStatsMax|IntrTxDataLow|1)) && debug)
printk(KERN_ERR "%s: Something Wicked happened! %4.4x.\n",
- dev->name, intr_status);
+ dev->name, intr_status);
/* Hmmmmm, it's not clear how to recover from DMA faults. */
if (intr_status & IntrDMAErr)
np->stats.tx_fifo_errors++;
@@ -1619,12 +1571,13 @@
np->stats.tx_aborted_errors =
readl(ioaddr + 0x57024) + readl(ioaddr + 0x57028);
np->stats.tx_window_errors = readl(ioaddr + 0x57018);
- np->stats.collisions = readl(ioaddr + 0x57004) + readl(ioaddr + 0x57008);
+ np->stats.collisions =
+ readl(ioaddr + 0x57004) + readl(ioaddr + 0x57008);

/* The chip only need report frame silently dropped. */
- np->stats.rx_dropped += readw(ioaddr + RxDMAStatus);
+ np->stats.rx_dropped += readw(ioaddr + RxDMAStatus);
writew(0, ioaddr + RxDMAStatus);
- np->stats.rx_crc_errors = readl(ioaddr + 0x5703C);
+ np->stats.rx_crc_errors = readl(ioaddr + 0x5703C);
np->stats.rx_frame_errors = readl(ioaddr + 0x57040);
np->stats.rx_length_errors = readl(ioaddr + 0x57058);
np->stats.rx_missed_errors = readl(ioaddr + 0x5707C);
@@ -1665,19 +1618,19 @@
struct dev_mc_list *mclist;
int i;

- if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
+ if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
/* Unconditionally log net taps. */
printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n", dev->name);
rx_mode = AcceptBroadcast|AcceptAllMulticast|AcceptAll|AcceptMyPhys;
} else if ((dev->mc_count > multicast_filter_limit)
- || (dev->flags & IFF_ALLMULTI)) {
+ || (dev->flags & IFF_ALLMULTI)) {
/* Too many to match, or accept all multicasts. */
rx_mode = AcceptBroadcast|AcceptAllMulticast|AcceptMyPhys;
} else if (dev->mc_count <= 15) {
/* Use the 16 element perfect filter. */
long filter_addr = ioaddr + 0x56000 + 1*16;
- for (i = 1, mclist = dev->mc_list; mclist && i <= dev->mc_count;
- i++, mclist = mclist->next) {
+ for (i = 1, mclist = dev->mc_list; mclist && i <= dev->mc_count;
+ i++, mclist = mclist->next) {
u16 *eaddrs = (u16 *)mclist->dmi_addr;
writew(cpu_to_be16(eaddrs[2]), filter_addr); filter_addr += 4;
writew(cpu_to_be16(eaddrs[1]), filter_addr); filter_addr += 4;
@@ -1696,7 +1649,7 @@

memset(mc_filter, 0, sizeof(mc_filter));
for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
- i++, mclist = mclist->next) {
+ i++, mclist = mclist->next) {
set_bit(ether_crc_le(ETH_ALEN, mclist->dmi_addr) >> 23, mc_filter);
}
/* Clear the perfect filter list. */
@@ -1778,15 +1731,15 @@
np->tx_ring_dma);
for (i = 0; i < 8 /* TX_RING_SIZE is huge! */; i++)
printk(KERN_DEBUG " #%d desc. %8.8x %8.8x -> %8.8x.\n",
- i, le32_to_cpu(np->tx_ring[i].status),
- le32_to_cpu(np->tx_ring[i].first_addr),
- le32_to_cpu(np->tx_done_q[i].status));
+ i, le32_to_cpu(np->tx_ring[i].status),
+ le32_to_cpu(np->tx_ring[i].first_addr),
+ le32_to_cpu(np->tx_done_q[i].status));
printk(KERN_DEBUG " Rx ring at %8.8x -> %p:\n",
- np->rx_ring_dma, np->rx_done_q);
+ np->rx_ring_dma, np->rx_done_q);
if (np->rx_done_q)
for (i = 0; i < 8 /* RX_RING_SIZE */; i++) {
printk(KERN_DEBUG " #%d desc. %8.8x -> %8.8x\n",
- i, le32_to_cpu(np->rx_ring[i].rxaddr), le32_to_cpu(np->rx_done_q[i].status));
+ i, le32_to_cpu(np->rx_ring[i].rxaddr), le32_to_cpu(np->rx_done_q[i].status));
}
}
#endif /* __i386__ debugging only */
@@ -1805,31 +1758,17 @@
}
for (i = 0; i < TX_RING_SIZE; i++) {
struct sk_buff *skb = np->tx_info[i].skb;
-#ifdef ZEROCOPY
- int j;
-#endif /* ZEROCOPY */
- if (skb != NULL) {
- pci_unmap_single(np->pci_dev,
- np->tx_info[i].first_mapping,
- skb_first_frag_len(skb), PCI_DMA_TODEVICE);
- np->tx_info[i].first_mapping = 0;
- dev_kfree_skb(skb);
- np->tx_info[i].skb = NULL;
-#ifdef ZEROCOPY
- for (j = 0; j < 6; j++)
- if (np->tx_info[i].frag_mapping[j]) {
- pci_unmap_single(np->pci_dev,
- np->tx_info[i].frag_mapping[j],
- skb_shinfo(skb)->frags[j].size,
- PCI_DMA_TODEVICE);
- np->tx_info[i].frag_mapping[j] = 0;
- } else
- break;
-#endif /* ZEROCOPY */
- }
+ if (skb == NULL)
+ continue;
+ pci_unmap_single(np->pci_dev,
+ np->tx_info[i].first_mapping,
+ skb_first_frag_len(skb), PCI_DMA_TODEVICE);
+ np->tx_info[i].first_mapping = 0;
+ dev_kfree_skb(skb);
+ np->tx_info[i].skb = NULL;
}

- MOD_DEC_USE_COUNT;
+ COMPAT_MOD_DEC_USE_COUNT;

return 0;
}
@@ -1839,7 +1778,7 @@
{
struct net_device *dev = pci_get_drvdata(pdev);
struct netdev_private *np;
-
+
if (!dev)
BUG();

@@ -1849,7 +1788,7 @@
iounmap((char *)dev->base_addr);

release_mem_region(pci_resource_start (pdev, 0),
- pci_resource_len (pdev, 0));
+ pci_resource_len (pdev, 0));

if (np->tx_done_q)
pci_free_consistent(np->pci_dev, PAGE_SIZE,
@@ -1896,8 +1835,7 @@
* Local variables:
* compile-command: "gcc -DMODULE -Wall -Wstrict-prototypes -O6 -c starfire.c"
* simple-compile-command: "gcc -DMODULE -O6 -c starfire.c"
- * c-indent-level: 4
- * c-basic-offset: 4
- * tab-width: 4
+ * c-basic-offset: 8
+ * tab-width: 8
* End:
*/
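
A note on the set_rx_mode() hunks above: when there are too many multicast addresses for the 16-entry perfect filter, each address is hashed with the little-endian Ethernet CRC and the top nine bits of the CRC (the >> 23 in the hunk) select one bit in a 512-bit hash filter. The following stand-alone sketch reproduces just that computation in user-space C; ether_crc_le() is re-implemented here purely for illustration, and the sample address and the 16-bit word layout of the filter are assumptions, not anything taken from the patch itself.

#include <stdio.h>

/* Illustrative re-implementation of the kernel's little-endian
   Ethernet CRC-32 (polynomial 0xedb88320, bits fed LSB-first). */
static unsigned int ether_crc_le(int length, const unsigned char *data)
{
	unsigned int crc = 0xffffffff;		/* initial value */
	while (--length >= 0) {
		unsigned char octet = *data++;
		int bit;
		for (bit = 0; bit < 8; bit++, octet >>= 1) {
			if ((crc ^ octet) & 1)
				crc = (crc >> 1) ^ 0xedb88320;
			else
				crc >>= 1;
		}
	}
	return crc;
}

int main(void)
{
	/* Example multicast address (purely hypothetical). */
	unsigned char mc_addr[6] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 };
	unsigned short mc_filter[32] = { 0 };	/* 512 bits, as 16-bit words */

	/* Same selection as in set_rx_mode(): the top nine bits of the
	   CRC pick one of 512 filter bits. */
	unsigned int bit_nr = ether_crc_le(6, mc_addr) >> 23;

	mc_filter[bit_nr >> 4] |= 1 << (bit_nr & 15);
	printf("address hashes to filter bit %u (word %u, bit %u)\n",
	       bit_nr, bit_nr >> 4, bit_nr & 15);
	return 0;
}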

2001-02-13 00:47:31

by Ion Badulescu

Subject: Re: [PATCH] new version of the starfire driver for 2.2.19pre

On Mon, 12 Feb 2001, Ion Badulescu wrote:

> Here is an incremental patch from the version in 2.2.19pre10 to the latest
> version of starfire.c. Please apply; the 2.2.19pre10 version doesn't work if
> compiled in (because drivers/net builds net.a, not net.o). It also fixes
> the MII interface detection problem mentioned by Don Becker.
>
> The patch is longish, but it's mostly whitespace and moving code around.
> It also removes all the code that's under #ifdef ZEROCOPY, since Jeff Garzik
> doesn't want it in 2.4.x and it definitely can't work in 2.2.x.

And of course I forgot to diff Space.c. Patch attached; sorry about that.

Thanks,
Ion

--
It is better to keep your mouth shut and be thought a fool,
than to open it and remove all doubt.
----------------
--- /usr/src/local/linux-2.2.18-vanilla/drivers/net/Space.c Sun Dec 10 16:49:42 2000
+++ linux-2.2.18/drivers/net/Space.c Sun Feb 11 14:53:02 2001
@@ -126,6 +126,7 @@
extern int rcpci_probe(struct device *);
extern int dmfe_probe(struct device *);
extern int sktr_probe(struct device *dev);
+extern int starfire_probe(struct device *dev);

/* Gigabit Ethernet adapters */
extern int yellowfin_probe(struct device *dev);
@@ -277,9 +278,12 @@
#ifdef CONFIG_VIA_RHINE
{via_rhine_probe, 0},
#endif
-#ifdef CONFI_NET_DM9102
+#ifdef CONFIG_NET_DM9102
{dmfe_probe, 0},
-#endif
+#endif
+#ifdef CONFIG_ADAPTEC_STARFIRE
+ {starfire_probe, 0},
+#endif
{NULL, 0},
};
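
For completeness, the reason a Space.c hook is needed at all when the driver is compiled in, as the mail above notes: drivers/net is linked as the library net.a, so the linker only pulls in objects whose symbols are referenced somewhere, and the pci_probes[] entry added above is what references starfire_probe() and gets it called at boot. Below is a stand-alone sketch of that probe-table pattern, not the actual Space.c code; struct devprobe_entry, the stubbed starfire_probe() and main() are hypothetical stand-ins.

#include <stdio.h>
#include <errno.h>

/* Stand-in for the kernel's struct device; the real one is far larger. */
struct device { const char *name; };

/* Hypothetical stub for the real starfire_probe() from starfire.c, which
   would scan the PCI bus and return 0 once an adapter has been set up. */
static int starfire_probe(struct device *dev)
{
	printf("starfire_probe(%s): no card found in this sketch\n", dev->name);
	return -ENODEV;
}

struct devprobe_entry {
	int (*probe)(struct device *dev);
	int failed;			/* set once this probe has given up */
};

/* Mirrors the shape of the pci_probes[] table extended by the hunk above. */
static struct devprobe_entry pci_probes[] = {
	{ starfire_probe, 0 },
	{ NULL, 0 },
};

/* Try each entry in turn; a probe returning 0 means it claimed a device. */
static int probe_list(struct device *dev, struct devprobe_entry *p)
{
	for (; p->probe != NULL; p++) {
		if (!p->failed && p->probe(dev) == 0)
			return 0;
		p->failed = 1;
	}
	return -ENODEV;
}

int main(void)
{
	struct device eth0 = { "eth0" };
	return probe_list(&eth0, pci_probes) == 0 ? 0 : 1;
}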