This year's iteration of Netdev, the technical conference on Linux networking, was held at the Hotel Grandium in Prague in mid-March. Pengutronix was asked to attend the IoT workshop, which we did with two developers.
For the usual hacker, 8:30 in the morning was a bit too early when Stefan Schmidt opened this year's IoT workshop.
After Stefan's welcoming words he gave an overview of the session followed by all attendees introducing themselves.
CAN and IoT
Although the CAN protocol has been around for a while, with a bit of abstraction CAN is used in a similar fashion to modern IoT communication hardware. Linux-based devices that include one or more CAN buses are usually some kind of embedded hardware. In many cases CAN is used to communicate with actuators and sensors that are less powerful and don't run Linux. In other setups, a CAN bus may connect several Linux-based electronic control units (ECUs).
CAN Basics and Discussion
My co-worker Oleksij Rempel gave a presentation about the CAN networking stack in Linux in general, as well as the ongoing activities to mainline an SAE J1939-compatible stack. This talk laid the foundations for the following discussion, which focused on the challenges of the CAN networking stack:
- lots of small packets (only 8 bytes of payload)
- J1939 messages up to 112 MiB
- packet scheduling, fq_codel
Packet Scheduling and fq_codel
Almost a year ago, an issue popped up on the systemd issue tracker. The problem is that some distributions set fq_codel as the default packet scheduler, either via the kernel configuration or via systemd's systemd-sysctl service.
The fq_codel packet scheduler works great for TCP/IP but causes lots of silently dropped CAN frames, which renders CAN unusable on these setups.
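On an affected system this can be checked and worked around by hand. The commands below are a sketch using the standard iproute2 and sysctl tools; the interface name can0 is an assumption for illustration:

```shell
# Show which qdisc is currently attached to the CAN interface
tc qdisc show dev can0

# Work around the problem: put pfifo_fast back as the root qdisc (needs root)
tc qdisc replace dev can0 root pfifo_fast

# Inspect the system-wide default qdisc that newly created interfaces will get
sysctl net.core.default_qdisc
```

If the last command prints fq_codel, the distribution has changed the default away from pfifo_fast, and CAN interfaces brought up afterwards are affected.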
After the IoT workshop I talked to several people about the issue, which helped me understand the problem better. Then, at the end of day one, Dave Taht, co-founder of the Bufferbloat project, got in touch with me.
After an insightful meeting with Dave the next day, plus several hours of browsing kernel code, hacking and testing, I posted the following RFC patch series to solve the problem:
There is networking hardware that isn't based on Ethernet for layers 1 and 2.
For example CAN.
CAN is a multi-master serial bus standard for connecting electronic control units (ECUs), also known as nodes. A frame on the CAN bus carries up to 8 bytes of payload. Frame corruption is detected by a CRC. Frame loss due to corruption is possible, but quite unusual.
While fq_codel works great for TCP/IP, it doesn't for CAN. There are a lot of legacy protocols on top of CAN which are not built with flow control or high CAN frame drop rates in mind.
When using fq_codel, as soon as the queue reaches a certain delay-based length, skbs from the head of the queue are silently dropped. Silently meaning that user space issuing a send() or similar syscall doesn't get an error. However, TCP's flow control algorithm will detect dropped packets and adjust the bandwidth accordingly.
When using fq_codel and sending raw frames over CAN, which is the common use case, user space thinks the packets have been sent without problems, because send() returned without an error. pfifo_fast will also drop skbs if the queue length exceeds the maximum, but with this scheduler the skbs at the tail are dropped and an error (-ENOBUFS) is propagated to user space, so that user space can slow down its packet generation.
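The difference between the two drop policies can be sketched with a toy model. This is plain Python, nothing kernel-specific; the queue limit and names are made up for illustration. A head-drop queue (fq_codel-like) loses frames without telling the sender, while a tail-drop queue (pfifo_fast-like) rejects the offending send immediately, analogous to -ENOBUFS:

```python
from collections import deque

LIMIT = 4  # illustrative queue limit, far smaller than a real txqueuelen

def send_head_drop(queue, frame):
    """fq_codel-like behaviour (simplified): accept the new frame and,
    if the queue is over its limit, silently drop from the head.
    The sender never sees an error."""
    queue.append(frame)
    dropped = []
    while len(queue) > LIMIT:
        dropped.append(queue.popleft())  # silent loss
    return dropped  # the sender is not told about these

def send_tail_drop(queue, frame):
    """pfifo_fast-like behaviour: refuse the new frame at the tail and
    report an error, like send() returning -ENOBUFS."""
    if len(queue) >= LIMIT:
        raise BufferError("ENOBUFS: queue full, slow down")
    queue.append(frame)

q1, q2 = deque(), deque()

lost = []
for i in range(6):
    lost += send_head_drop(q1, i)  # never raises, but frames vanish
print("silently dropped:", lost)   # the oldest frames are gone

errors = 0
for i in range(6):
    try:
        send_tail_drop(q2, i)
    except BufferError:
        errors += 1                # sender can back off and retry
print("rejected with error:", errors)
```

For a protocol without its own flow control, such as raw CAN frames, only the second behaviour gives the sender a chance to react before frames are lost.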
On distributions where fq_codel is made the default via CONFIG_DEFAULT_NET_SCH at compile time, or set as the default at runtime with sysctl net.core.default_qdisc, we get a bad user experience. In my test case with pfifo_fast, I can transfer thousands of millions of CAN frames without a frame drop. On the other hand, with fq_codel there is more than one lost CAN frame per thousand frames.
As pointed out, fq_codel is not suited for CAN hardware, so this patch introduces a new netdev_priv_flag called "IFF_FIFO_QUEUE" (in contrast to the existing "IFF_NO_QUEUE").
During the transition of a netdev from the down to the up state, the default queueing discipline is attached by attach_default_qdiscs() with the help of attach_one_default_qdisc(). This patch modifies attach_one_default_qdisc() to attach pfifo_fast (pfifo_fast_ops) if the "IFF_FIFO_QUEUE" flag is set.
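The idea can be sketched roughly as follows. This is abridged kernel pseudocode, not a buildable module and not the literal patch; IFF_FIFO_QUEUE is the flag proposed in the series, and the surrounding qdisc-creation code is elided:

```c
/* Sketch of the RFC idea in attach_one_default_qdisc()
 * (net/sched/sch_generic.c), simplified and abridged. */
static void attach_one_default_qdisc(struct net_device *dev,
				     struct netdev_queue *dev_queue,
				     void *_unused)
{
	const struct Qdisc_ops *ops = default_qdisc_ops;

	if (dev->priv_flags & IFF_NO_QUEUE)
		ops = &noqueue_qdisc_ops;
	else if (dev->priv_flags & IFF_FIFO_QUEUE)	/* proposed flag */
		ops = &pfifo_fast_ops;	/* tail-drop FIFO, reports -ENOBUFS */

	/* ... qdisc creation with the selected ops continues as before ... */
}
```

A CAN driver would then set IFF_FIFO_QUEUE on its netdev, so that such interfaces always come up with pfifo_fast regardless of the system-wide default qdisc.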
And for those who wonder: the gentleman playing guitar at Netdev's social event was Dave.