Embedded Linux Conference Europe 2023: Our Recommendations

Our contributions

You can find our contributions in this blog post.

Last month Pengutronix was present at the Embedded Open Source Summit (EOSS) in Prague. Thanks to all speakers for sharing your knowledge! In this blog post we want to shine a spotlight on a few talks that we found especially interesting. (Links to recordings will be added once the recordings are available.)

Chris' Suggestion: Testing and Remote Access to Embedded System DPI/LVDS Display Output (Marek Vasut)

Schedule, Recording

In his talk Marek presented a way to capture the raw data of parallel display outputs and visualize the result. This allows easy debugging of the final output at the end of the complete display chain.

The cool thing about his method is that he does not need expensive or specialized hardware to do so. His solution incorporates the USB 3 version of a well-known parallel-to-USB buffered interface that has been used with sigrok as a logic analyzer for quite some time. He is using the Cypress FX3 USB controller together with the already existing fx3lafw Open Source firmware.

Marek used the FX3 dev-kit as a platform and added an interface PCB to his device under test (DUT). The interface mechanically adapts the connector on the DUT to the connector on the dev-kit and has no active parts. After some firmware modifications (fx3lafw samples free-running, while the parallel display signal comes with a synchronous clock) he was able to capture the raw data stream coming from the DUT. This also includes blanking areas and the H-sync and V-sync signals.

In the end Marek was able to capture frames and visualize them.
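As a rough illustration of that last step, the sketch below turns a raw capture into a PNG. The sample layout, file names and resolution are assumptions made up for this example; the real format depends on the FX3 firmware and on how the capture board is wired to the DUT.

```python
# A toy decoder for a raw capture of a parallel RGB888 display interface.
# Assumed layout (for illustration only): each little-endian 32-bit word
# holds RGB888 in bits 0-23 and a data-enable flag in bit 26.
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 800, 480                        # assumed active video area

samples = np.fromfile("capture.bin", dtype="<u4")

mask = ((samples >> 26) & 1).astype(bool)       # data enable: active pixels only
pixels = samples[mask][: WIDTH * HEIGHT]        # first full frame

rgb = np.zeros((WIDTH * HEIGHT, 3), dtype=np.uint8)
rgb[:, 0] = (pixels >> 16) & 0xFF               # red
rgb[:, 1] = (pixels >> 8) & 0xFF                # green
rgb[:, 2] = pixels & 0xFF                       # blue

Image.fromarray(rgb.reshape(HEIGHT, WIDTH, 3)).save("frame.png")
```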

A capture device for parallel video output was missing from our toolkit for development and automated testing. It's great to see that all the building blocks are already there. I will try to build this setup myself once I am back at the office.

Roland's Suggestion: Accelerated Mainline Linux Development Ahead of SoC Availability (Bryan Brattlof & Praneeth Bajjuri)

Schedule, Recording

Although their colleague Praneeth Bajjuri could not make it to the conference, Bryan Brattlof of Texas Instruments gave a good overview of how the design and driver development process works for new SoCs.

They showed that the development process of a modern SoC is not at all straight-forward; instead, software and hardware development are heavily intertwined, which results in fast development cycles. With the different steps in the design process, from the initial idea over hardware design and validation to tapeout at the manufacturer, they are able to react to feedback from internal as well as external community members such as business people, system developers and architects, and experts for the different Linux kernel subsystems. This way the SoC vendor can address past pain points and get a feeling for whether a new SoC will be maintainable in mainline Linux at all.

Of course, having kernel developers in-house makes it possible for TI to work with internal pre-release versions of new SoC designs, and to develop preliminary driver support for new hardware units and the surrounding hardware ecosystem (e.g. PMICs) while the design is still being validated. Until real hardware is available, the new systems are emulated in software based on Verilog and SystemC models, which is fairly slow (as Bryan put it, "You find other things to do after you hit that Run button"). Still, this approach allows them to find errors quite fast, and the turnaround for a bug fix can be measured in days or even hours rather than weeks.

In the Q&A, I was positively surprised that TI seems to have moved to an "upstream first" approach. When asked how they persuaded management, Bryan emphasized that they realized the "downstream BSP" approach was not feasible for long-term platform support. They learned from that past mistake and decided to work with upstream directly instead, which gives them the advantage of feedback, support, and maintenance by the wider kernel community. After all, they want happy customers… :-)

Johannes' Suggestion: MUSE: MTD in Userspace, or How to Extend FUSE for Fun and Profit (Richard Weinberger)

Schedule, Recording

In his position as MTD subsystem maintainer, Richard not only spends a significant amount of time improving NAND drivers and filesystems, but also investigating NAND images.

NAND flash is prone to bits flipping now and then, which requires error correction to be implemented in NAND filesystems. Debugging with real-world NAND dumps can be quite painful, since looking at raw dumps in a hex editor is still somewhat state of the art.

In his talk Richard pointed out that the existing in-kernel NAND-flash simulators mtdram, block2mtd and nandsim, besides qemu virt, have been a good pathway for implementing and testing error correction code in particular and for bringing the MTD framework into good shape, but they often do not simulate flash wear and are not very flexible with respect to fault injection and fuzzing.

When a colleague of Richard's mentioned CUSE (character device in userspace), Richard was immediately hooked and wondered whether CUSE or the closely related FUSE (filesystem in userspace) could be used to write a new and more in-depth MTD simulator, for which he started writing MUSE (MTD in userspace). Richard soon found that only minor additions of under 1k LOC to the already existing FUSE implementation in the kernel were needed to simulate bad block handling, and that he could quite easily plug MTD filesystems on top of a FUSE-based MTD simulator.

He now aims for full record/replay support when working with real-world NAND dumps, recording single steps of error correction for better analysis, fuzzing, fault injection, and support for multiple formats of raw NAND dumps.
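To give an idea of what fault injection on the image level can mean, here is a toy sketch that flips a few random bits in a copy of a raw NAND dump so that a filesystem's error correction can be exercised against it. This is not MUSE code; file names and the number of flips are made up for the example.

```python
# Toy fault injection: flip a handful of random bits in a copy of a raw NAND image.
import random

FLIPS = 16

with open("nand-dump.bin", "rb") as f:
    data = bytearray(f.read())

for _ in range(FLIPS):
    pos = random.randrange(len(data))   # pick a random byte ...
    bit = random.randrange(8)           # ... and a random bit in it
    data[pos] ^= 1 << bit               # flip exactly that bit

with open("nand-dump-faulty.bin", "wb") as f:
    f.write(data)
```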

Richard still wants to experiment with his implementation, but announced that he will upstream his work once he has figured out a stable set of operations the FUSE device must support for the MTD simulation. He also presented the internal structures and how FUSE works: it uses a rather unique setup where the kernel acts as a client to a userspace server, with the kernel making requests that are answered by userspace. At the moment Richard is experimenting with the libfuse low-level API in Rust, while his original implementation was written in C.
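To make that client/server inversion more tangible, here is a minimal read-only filesystem written with the third-party fusepy package. It has nothing to do with MTD or MUSE; it only shows how each request made by the kernel is answered by callbacks running in a userspace process.

```python
# Minimal read-only filesystem using the third-party "fusepy" package
# (pip install fusepy). Every getattr/readdir/read request made by the
# kernel is answered by these Python callbacks -- userspace is the server.
import errno
import stat
import sys

from fuse import FUSE, FuseOSError, Operations

class HelloFS(Operations):
    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path == "/hello":
            return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1, "st_size": 6}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello"]

    def read(self, path, size, offset, fh):
        return b"hello\n"[offset:offset + size]

if __name__ == "__main__":
    FUSE(HelloFS(), sys.argv[1], foreground=True)   # mountpoint as first argument
```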

At the end of his talk, he also pointed out that this could possibly be (mis)used for userspace MTD drivers, though this is strongly discouraged: not only does this approach come with a performance penalty, it also has security implications and is calling for lots of other havoc.

I really enjoyed this talk, as it shows a creative use of already existing infrastructure for solving real-world problems. I am curious whether we will soon see similar FUSE-based approaches tackling issues in other fields in our customer projects.

Michael's Suggestion: How Much Is Tracing? Measuring the Overhead Caused by the Tracing Infrastructure (Andreas Klinger)

Schedule, Recording

Andreas reported on his investigation of the performance overhead of different tracing mechanisms. He uses a test setup consisting of a BeagleBone Black with a custom Linux driver that toggles a GPIO, and an oscilloscope to measure the pulse width of the GPIO. Enabling and configuring different tracing mechanisms in this test setup and observing the effects on the GPIO pulse width makes it possible to draw conclusions about the overhead of the different mechanisms.
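As a very rough userspace analogue of this compare-with-and-without idea, the sketch below times a fixed workload once with a syscall tracepoint disabled and once with it enabled. This is not Andreas' setup (he measures a GPIO pulse generated by a kernel driver with an oscilloscope), and the tracefs path and event name are assumptions that depend on the kernel configuration; it also needs root.

```python
# Time a fixed workload with a syscall tracepoint off vs. on and compare.
import os
import time

EVENT = "/sys/kernel/tracing/events/syscalls/sys_enter_openat/enable"
ITERATIONS = 100_000

def set_event(value):
    with open(EVENT, "w") as f:
        f.write(value)

def workload():
    start = time.monotonic_ns()
    for _ in range(ITERATIONS):
        fd = os.open("/dev/null", os.O_RDONLY)   # each iteration hits openat()
        os.close(fd)
    return (time.monotonic_ns() - start) / ITERATIONS

set_event("0")
off = workload()
set_event("1")
on = workload()
set_event("0")

print(f"per call: {off:.0f} ns with tracing off vs. {on:.0f} ns with tracing on")
```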

While developing actual products, we have already experienced that the tracing mechanisms don't come for free, even though they are highly optimized and should have very little overhead when they are enabled at compile time but disabled at runtime. Seeing this performance overhead confirmed by actual measurements is very insightful, as tracing is a very valuable debugging tool and one may prefer to keep it enabled in production systems.

Steven Rostedt, who is the maintainer of the tracing infrastructure, showed a lot of interest in the measurement results. This further underlines the relevance of Andreas' work. Hopefully, this work will result in further optimization of the tracing infrastructure, or at least in some guidelines on which tracing options may be kept enabled in production systems and at what cost.

ELCE Booth Crawl

During the Onsite Attendee Reception, the Embedded Linux Conference Europe (ELCE) gives projects and companies the chance to present their projects with a tabletop demo. The booth crawl is always another good opportunity to get in touch with the community.

Pengutronix used this chance to present two of our tabletop demos:

Our Open Source FPGA Demo shows that it is possible to build complete systems with an Open Source toolchain, a RISC-V soft-core, a build system and some IP cores.

Our Labgrid Demo showcases how easy it is to run tests on real hardware using labgrid and the Linux Automation Test Automation Controller.

Labgrid is a Python-based Open Source board control library. It allows you to interactively control embedded devices on your desk - or at a remote location. It also has a strong focus on automation: You can either use it as a library in your scripts or as a pytest plugin. This way, you can run tests on real hardware that feel like pure software testing.
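As a small taste of what such a test can look like, here is a minimal sketch using labgrid's pytest plugin. The environment file name is hypothetical, and the test assumes a target that exposes a command protocol (for example a serial shell or SSH connection).

```python
# test_kernel.py -- run with: pytest --lg-env my-board.yaml test_kernel.py
# "my-board.yaml" is a hypothetical labgrid environment file.

def test_kernel_version(target):
    # The labgrid pytest plugin provides the `target` fixture for the board
    # described in the environment file.
    shell = target.get_driver("CommandProtocol")   # e.g. a ShellDriver or SSHDriver
    output = shell.run_check("uname -r")           # run command, fail on non-zero exit
    assert output, "expected the kernel to report a version"
```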

The Test Automation Controller (TAC) (designed by our spin-off Linux Automation) connects the physical world to labgrid. It provides interfaces such as a UART, a power switch with voltage and current measurement, USB host and device ports, and general-purpose I/Os.

The test controller can be powered using Power over Ethernet (PoE) and has an integrated Ethernet switch. This means that, in most cases, you only need two connections to your test setup: power for your device under test and Power over Ethernet for the Test Automation Controller. The device runs Linux based on Yocto. The Yocto layer is already available under the MIT license. The device will most likely be available in autumn 2023.

In this demonstration we run a few tests on the device under test (DUT), in this case a BeagleBone with a separate motor board. At the beginning of a test run, labgrid switches the device off and provisions the newest software image onto the SD card using a USB-SD-Mux. Afterwards, the DUT is powered on and labgrid interrupts the boot process to run a first test inside the bootloader. Then the boot process is resumed and labgrid takes control of the Linux shell.
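A simplified sketch of this power-off/provision/bootloader/shell flow using labgrid's strategy mechanism could look roughly like the following. The state names assume a barebox-style strategy; the concrete drivers and states used in the demo setup may differ.

```python
# Sketch of the power-cycle / bootloader / shell flow with a labgrid strategy.
import pytest

@pytest.fixture(scope="session")
def strategy(target):
    # The strategy drives the board through its states (off, bootloader, shell).
    return target.get_driver("Strategy")

def test_bootloader(target, strategy):
    strategy.transition("barebox")                # power on, stop in the bootloader
    barebox = target.get_driver("BareboxDriver")
    assert barebox.run_check("echo ping")         # trivial command in the bootloader

def test_linux_shell(target, strategy):
    strategy.transition("shell")                  # continue booting into Linux
    shell = target.get_driver("ShellDriver")
    assert shell.run_check("uname -a")            # we have a Linux shell now
```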

In the shell, the main functionality of this device is tested: We want to spin the disk at a given speed. By slowing the disk down manually, we can let the tests fail intentionally. In a real-world scenario these results would now be collected by the test runner and some engineer would have to investigate what has happened. But in our demonstrator the process simply starts over again.


Further Reading

Embedded Linux Conference Europe 2023: Our Contributions

This year the Embedded Linux Conference Europe (ELCE) is back in Prague! Pengutronix, again, is on a field trip with 15 colleagues to attend the conference. The ELCE is one of the big conferences where the Embedded Linux community meets during the year. This time the ELCE is part of the Embedded Open Source Summit (EOSS): a new conference with only embedded topics and without cloud or crypto tracks.


Chemnitzer Linux-Tage 2024

Pengutronix attended the Chemnitzer Linux-Tage again this year. As every year, the CLT were a welcome opportunity to meet friends and talk about Linux, Open Source and the rest of the world.


FrOSCon 2023

In a few hours, the 18th FrOSCon starts at the Hochschule Bonn-Rhein-Sieg. Pengutronix is again on site with a small team. At one of the partner booths we will once more be showing some of our activities in the Open Source community, bringing along our labgrid demonstrator and the FPGA demo.


DjangoCon Europe 2023

Django is Pengutronix' framework of choice for the software that handles our business processes. These internal tools also always offer an opportunity to try out newer developments in the Django universe.


rsc's Diary: ELC-E 2022 - Day 4

Friday was the last day of ELC-E 2022 and thus also the day of the traditional ELC-E Closing Game. Tim Bird gave his customarily entertaining report on the current state of Embedded Linux World (Universe?) Domination. And of course there were some interesting talks on the last day as well.


rsc's Diary: ELC-E 2022 - Day 3

The Convention Centre sits right on the Liffey, only a few minutes' walk from O'Connell Bridge, Temple Bar and Trinity College. A visit to ELC-E is always a good opportunity to get to know interesting cities in Europe. And here is my report on the talks I attended on day 3.


rsc's Diary: ELC-E 2022 - Day 2

The Dublin Convention Centre is huge - there is more than enough room for the many attendees of the Open Source Summit. Fortunately, the talks will be available on YouTube after the conference, so it is not a problem if you cannot attend every talk on site. Here is my report on the talks I attended on the second day of the conference.