Running Linux on a PCI Add-in Card: Hardware

Every so often I see someone attempting to run the Linux kernel on a PCI add-in card. I’ve done this myself, but there are a lot of complications. This article covers the hardware, and a second article will cover the software. Don’t take this as chipset selection advice: before you commit to hardware, double-check both the errata and the availability of the silicon.

The first problem to face is terminology: the terms used for PCI lead to cluttered search engine results. According to the PCI specification, whether a device is a PCI master or target is independent of whether it is a host bridge. A PCI master is any device capable of initiating PCI cycles, and a PCI target is any device that can respond to them. A host bridge must implement both a master and a target in order to be useful. An ordinary PCI device is required to respond to PCI configuration cycles; a host bridge is not. The PCI specification uses the term add-in for PCI cards, but does not define a term for non-host device silicon. Some device datasheets use the term PCI slave to refer to the PCI target functionality, whereas others use PCI slave to indicate that the device is not a host bridge. Finally, the term PCI Option, while not in the PCI specification, can be found referring either to an add-in card or to any device that is not a host bridge.
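To make the master/target split concrete, here is a minimal sketch that reads the first dword of a device’s configuration space (vendor and device ID) using the legacy x86 configuration mechanism: the port writes ask the host bridge, acting as the master, to generate a configuration cycle, and the addressed device answers as the target. The bus/device/function numbers are only examples, and the program needs root for iopl().

    /* Sketch: read dword 0 of PCI configuration space via the legacy
     * x86 configuration mechanism (ports 0xCF8/0xCFC). Requires root. */
    #include <stdio.h>
    #include <sys/io.h>

    #define PCI_CONFIG_ADDRESS 0xCF8
    #define PCI_CONFIG_DATA    0xCFC

    static unsigned int config_read32(unsigned int bus, unsigned int dev,
                                      unsigned int fn, unsigned int offset)
    {
        unsigned int address = 0x80000000u       /* enable bit      */
                             | (bus << 16)       /* bus number      */
                             | (dev << 11)       /* device number   */
                             | (fn  << 8)        /* function number */
                             | (offset & 0xFC);  /* dword-aligned   */
        outl(address, PCI_CONFIG_ADDRESS);
        return inl(PCI_CONFIG_DATA);
    }

    int main(void)
    {
        if (iopl(3) < 0) {
            perror("iopl");
            return 1;
        }
        /* Dword 0 holds the device ID (high half) and vendor ID (low half). */
        unsigned int id = config_read32(0, 0, 0, 0);
        printf("bus 0, device 0: vendor 0x%04x, device 0x%04x\n",
               id & 0xFFFF, id >> 16);
        return 0;
    }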

The hardware requires a way to connect a processor running Linux to the PCI bus through a bridge that can respond to configuration cycles from the system host bridge. The host will use the device’s BARs (Base Address Registers) to find the size of each memory and I/O window. This sets the size of the memory window on the device that the host can address, which will normally be less than the entire memory on the device. Some devices use a small ROM, which the bridge chip loads at reset, to set the BAR sizes; others leave the job to software running on the device. If the device software sets the BAR sizes, the hardware will need to hold off configuration cycles from the host until the device software is ready.
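The sizing handshake itself is simple: configuration software writes all ones to a BAR, reads it back, and the bits that stayed zero give the window size. A minimal sketch, assuming hypothetical cfg_read32()/cfg_write32() helpers standing in for whatever configuration access the host provides:

    /* Sketch: size a 32-bit memory BAR at configuration offset 'off'
     * (0x10, 0x14, ...). cfg_read32()/cfg_write32() are hypothetical. */
    #include <stdint.h>

    uint32_t cfg_read32(unsigned bus, unsigned dev, unsigned fn, unsigned off);
    void cfg_write32(unsigned bus, unsigned dev, unsigned fn, unsigned off,
                     uint32_t val);

    uint32_t bar_size(unsigned bus, unsigned dev, unsigned fn, unsigned off)
    {
        uint32_t saved = cfg_read32(bus, dev, fn, off);

        cfg_write32(bus, dev, fn, off, 0xFFFFFFFFu);   /* probe   */
        uint32_t probed = cfg_read32(bus, dev, fn, off);
        cfg_write32(bus, dev, fn, off, saved);         /* restore */

        probed &= ~0xFu;      /* clear the memory-BAR type/prefetch bits  */
        return ~probed + 1;   /* e.g. 0xFFF00000 -> 0x100000 (1 MiB)      */
    }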

The easiest hardware for the job is a non-transparent PCI-to-PCI bridge (Intel, Tundra, PLX). This allows a standard processor with a standard host bridge to work on an add-in card, and leaves the non-transparent bridge to separate the two PCI domains. The advantages of this are few modifications to the Linux kernel on the add-in card and the ability to place extra PCI devices on the card, invisible to the system host. The disadvantage is the extra silicon required.
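As a rough illustration of how the two domains are stitched together, the card-side processor typically points a translation register at a buffer in its own memory, and the matching BAR in the host’s domain then lands in that buffer. The register offsets and the ntb_write_reg() helper below are hypothetical; real parts (Intel, Tundra, PLX) each have their own register layout.

    /* Sketch: point the host-visible BAR2 window of a non-transparent
     * bridge at a local buffer. Register names are hypothetical. */
    #include <stdint.h>

    #define NTB_BAR2_XLAT_LO  0x0000  /* hypothetical: translation, low 32 bits  */
    #define NTB_BAR2_XLAT_HI  0x0004  /* hypothetical: translation, high 32 bits */

    void ntb_write_reg(uint32_t offset, uint32_t value);  /* card-side access */

    void ntb_map_window(uint64_t local_buf_phys)
    {
        ntb_write_reg(NTB_BAR2_XLAT_LO, (uint32_t)local_buf_phys);
        ntb_write_reg(NTB_BAR2_XLAT_HI, (uint32_t)(local_buf_phys >> 32));
        /* From here, host writes into its BAR2 land in local_buf_phys
         * on the card side of the bridge. */
    }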

The second method is to pick a host bridge for the processor that can respond to configuration cycles and run as a non-host device. These are commonly available for PowerPC (Tundra, Marvell, PLX) and integrated into some ARM devices (Intel). The advantage here is a reduced chip count; the disadvantage is the greater programming effort. The Linux kernel is unlikely to boot on this class of hardware without modification, and that is before you get to the real work of communicating with the host.
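On this class of hardware the critical early step is the hold-off described above: the card-side software must program its inbound window sizes before letting the host’s configuration cycles complete. A sketch of that sequence, with hypothetical register names and helpers standing in for the real bridge manual:

    /* Sketch: card-side early init on an integrated, non-host-mode bridge.
     * Register names and bridge_read()/bridge_write() are hypothetical. */
    #include <stdint.h>

    #define BRIDGE_INBOUND_WIN0_SIZE  0x100      /* hypothetical: inbound BAR0 size/mask */
    #define BRIDGE_CTRL               0x104      /* hypothetical: bridge control          */
    #define BRIDGE_CTRL_CFG_RETRY     (1u << 0)  /* hold off host config cycles           */

    void bridge_write(uint32_t offset, uint32_t value);  /* local register access */
    uint32_t bridge_read(uint32_t offset);

    void bridge_early_init(void)
    {
        /* Advertise a 1 MiB inbound window to the host (encoded as a mask). */
        bridge_write(BRIDGE_INBOUND_WIN0_SIZE, ~(0x100000u - 1));

        /* Only now stop retrying the host's configuration cycles, so
         * enumeration sees the sizes we just programmed. */
        bridge_write(BRIDGE_CTRL,
                     bridge_read(BRIDGE_CTRL) & ~BRIDGE_CTRL_CFG_RETRY);
    }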

Next: Software issues for PCI add-in cards.