Using Conan and cross-compile to EV3
Table of Contents
Overview
I'll start with a disclaimer. The goal of this project was to create an environment in which I could easily cross-compile c++ code and run it on my Lego Mindstorms running ev3dev (debian linux). I wanted to use conan for two reasons: one, to learn it, and the other is that I like the idea of having a package manager for c++. I have struggled enough with trying to find and build libraries, and a package manager will make life much easier. I am by no means an expert in this field. There might be easier and/or better ways of doing things, but this is the way I found through trial and error and reading. And docker, well, I like the container idea, but again I did the bare minimum to get things to work.
Ok, so what is this? A couple of years ago I bought a Lego Mindstorms kit for my kids. The reason was that when I was a kid I started using a Commodore, added things like lights and buttons to some interface it had in the back, and managed to get it to blink. How I knew what to do, I have no idea; I probably read it in a computer magazine somewhere and started fiddling with it. Anyhow, it was a great memory, and the whole thing stuck with me while growing up. So computers became my profession, though maybe not in a straight line, but somehow I ended up doing what I thought was fun back then. So my simple-minded thought was: since I loved fiddling with computers when I was a kid, my kids might like it too… Was I ever wrong. They looked at it and said "Nah, let's play roblox". WTF!! There I was, with my painfully expensive robot kit which my kids dismissed. So it got stuck in a corner for some years, until one day I read about ev3dev. I added the image to an old SD card and voilà, it ran linux. I wrote some python programs to have it do stuff, but honestly I'm not a big fan of python. I wanted c++… So this article is about how I did it, using a cross-compiler in a not-so-painful fashion. Well, it could be worse. My kids… Yes, they have been part of the development. Well, they named it: "Gorby". That's about it.
Giving Gorby a (debian) life
Gorby needs a system; we already discussed some of this in the
overview. I will stick to the supported one, but it is kind of
old. For example, the installed glibc is 2.24, which was released
in 2016. So quite old, compared with my host machine which is on
version 2.32. When it comes to c++, the std libraries are on
version 6.0.22, which means the c++ standard it supports is c++14,
which is a bit old. But maybe in another post I will upgrade the EV3
to a later kernel and libc. There is an upgrade page where I guess
you can find newer images. The docker image page also provides a way
to make an image of a newer distribution. In this article I will just
use the supported image, which can be downloaded from here.
Adding image to microSD-card
The downloaded image needs to be unzipped; then locate the image file (*.img).
Time to add it to the MicroSd-card.
sudo dd if=ev3dev-stretch-ev3-generic-2020-04-10.img of=/dev/sdf bs=4M
This assumes that sdf is your inserted device name. You can determine
that it's the right device by opening a terminal, running
journalctl -fk and then inserting your microSD card, e.g.

$> sudo journalctl -fk
...
<insert SD card>
...
sd 12:0:0:2: [sdf] 124735488 512-byte logical blocks: (63.9 GB/59.5 GiB)
When the image has been written to the card, just put it into the slot on the EV3
and boot it up. If everything is done correctly the ev3 should boot up, and an EV3DEV
logo should be seen.
Connection to EV3
We now need to connect to the EV3 so we can ssh to it. I won't do a big tutorial on this subject, since the existing documentation already covers most of it. I was keen to get everything up and running as fast as I could, so I used bluetooth, and it worked like a charm.
I could now connect to Gorby
and make some initial assessment.
Check the Gorby hardware
So let's first check out what kind of network devices Gorby has and whether any ip addresses are assigned to them.
ip addr | awk '/[0-9]: / {printf("%s %s\n",$2,$9)} /inet / {printf(" %s\n", $2)}'
lo: UNKNOWN
  127.0.0.1/8
sit0@NONE: DOWN
usb0: DOWN
usb1: DOWN
bnep0: UNKNOWN
  10.42.0.250/24
Just for information, what cpu information can we obtain?
lscpu
Architecture:        armv5tejl
Byte Order:          Little Endian
CPU(s):              1
On-line CPU(s) list: 0
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           1
Model:               5
Model name:          ARM926EJ-S rev 5 (v5l)
BogoMIPS:            148.88
Flags:               swp half thumb fastmult edsp java
There are some things that we can take note of here. First of all, it's a single core (32-bit) CPU, so only one thread can be active at a time. That doesn't mean we can't use more threads; it only means that one instruction executes at a time and the kernel switches between threads. Utilization can still improve, since threads often just wait, and that is when other threads can do some work.
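To illustrate that point, here is a small python sketch (nothing EV3-specific; the sleep stands in for a thread that is blocked waiting on I/O, e.g. a sensor): two threads that mostly wait finish in roughly the time of one, even on a single core.

```python
import threading
import time

def wait_for_sensor(seconds):
    # Simulate an I/O-bound task: the thread sleeps, so the single
    # core is free for the other thread while this one waits.
    time.sleep(seconds)

start = time.monotonic()
threads = [threading.Thread(target=wait_for_sensor, args=(0.2,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# Both threads waited 0.2 s, but the waits overlap, so the total
# elapsed time is close to 0.2 s rather than 0.4 s.
print(f"elapsed: {elapsed:.2f}s")
```

For CPU-bound work, on the other hand, two threads on this single core would take roughly twice as long as one.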
The byte order is the default (little endian), so we don't actually
need the compiler argument -mlittle-endian, but it's good to know.
Checking the model name reveals that it belongs to
the ARM9E family, which is quite old (2001). It does, however, enable
direct execution of 8-bit Java bytecode in hardware.
The gcc compiler option for this processor is -march=armv5
There is no fpu in the flags field, which means we don't have an
fpu (floating point unit). So if we are going to use floating point
we need to emulate it; this is a compiler argument
(-mfloat-abi=soft), but I don't think the docker has a floating
point library installed. So it will be set to hard.
The processor supports thumb instructions, which are used to improve compiled code density. As I understand it, they map directly to the arm instructions but produce smaller binaries with the same functionality. It's not necessary in this case, though. Let's check out the available disk size:
df -h
Filesystem | Size | Used | Avail | Use% | Mounted on |
---|---|---|---|---|---|
udev | 26M | 0 | 26M | 0% | /dev |
tmpfs | 20M | 336K | 20M | 2% | /run |
/dev/mmcblk0p2 | 7.3G | 1015M | 5.9G | 15% | / |
tmpfs | 29M | 0 | 29M | 0% | /dev/shm |
tmpfs | 5.0M | 4.0K | 5.0M | 1% | /run/lock |
tmpfs | 29M | 0 | 29M | 0% | /sys/fs/cgroup |
/dev/mmcblk0p1 | 48M | 224K | 48M | 1% | /boot/flash |
tmpfs | 5.7M | 0 | 5.7M | 0% | /run/user/1000 |
The 5.9 GB available is quite enough that we don't need to care about the thumb instructions.
The half flag means that the processor supports load and store of
signed 8-bit bytes and 16-bit halfwords. Unsigned halfword loads are
zero-extended to 32 bits. I don't think we need to care about this
one. (gcc -mthumb or -marm)
The edsp flag indicates a digital signal processing extension offering high-performance signal processing. We don't need to care about this either; it's a built-in feature of the processor.
Let's skip the rest, since it's nothing we want to use anyway. But
one thing that might be good to know is that we should probably
provide -mtune=arm926ej-s to the compiler, which might give
some performance gains.
cat /proc/meminfo | awk '/(MemTotal:|MemFree:|MemAvailable:|SwapTotal:|SwapFree:)/ {printf("%s %s %s %s\n",$1,$2,$3,$2/1024)}'
Name | Value | Unit | MB |
---|---|---|---|
MemTotal: | 57620 | kB | 56.2695 |
MemFree: | 1436 | kB | 1.40234 |
MemAvailable: | 32988 | kB | 32.2148 |
SwapTotal: | 98300 | kB | 95.9961 |
SwapFree: | 94716 | kB | 92.4961 |
Here is a small explanation:
- MemTotal
- Total usable memory
- MemFree
- The amount of physical memory not used by the system
- MemAvailable
- An estimate of how much memory is available for starting new applications, without swapping.
- SwapTotal
- Total swap space available
- SwapFree
- The remaining swap space available
57 MB total memory with 32 MB available for running applications is not very much, so we are on scarce resources compared to e.g. a pc. It is necessary to make sure that we don't have things running which are not in use. Though things could have been worse.
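The MB column in the table above is just the kB value from /proc/meminfo divided by 1024. As a sketch, here is the same conversion in python (using the sample values from the table, so it runs anywhere, not just on the EV3):

```python
# Sample lines in /proc/meminfo format (values taken from the table above).
meminfo = """\
MemTotal:          57620 kB
MemFree:            1436 kB
MemAvailable:      32988 kB
SwapTotal:         98300 kB
SwapFree:          94716 kB
"""

def parse_meminfo(text):
    """Return a dict of field name -> size in MB (kB / 1024)."""
    result = {}
    for line in text.splitlines():
        name, value, _unit = line.split()
        result[name.rstrip(":")] = int(value) / 1024
    return result

sizes = parse_meminfo(meminfo)
print(f"MemAvailable: {sizes['MemAvailable']:.1f} MB")
```

On the real device you would read the text from /proc/meminfo instead of the embedded sample string.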
One more thing needs to be fixed. Right now it doesn't seem to have
a proper resolver, so running an update on the system with apt
doesn't work. Usually this is fixed by editing /etc/resolv.conf,
but for some reason that didn't work. After some investigation I
found this:
systemctl cat systemd-resolved
# /lib/systemd/system/systemd-resolved.service
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.

[Unit]
Description=Network Name Resolution
Documentation=man:systemd-resolved.service(8)
Documentation=http://www.freedesktop.org/wiki/Software/systemd/resolved
Documentation=http://www.freedesktop.org/wiki/Software/systemd/writing-network-configuration-managers
Documentation=http://www.freedesktop.org/wiki/Software/systemd/writing-resolver-clients
After=systemd-networkd.service network.target

# On kdbus systems we pull in the busname explicitly, because it
# carries policy that allows the daemon to acquire its name.
Wants=org.freedesktop.resolve1.busname
After=org.freedesktop.resolve1.busname

[Service]
Type=notify
Restart=always
RestartSec=0
ExecStart=/lib/systemd/systemd-resolved
WatchdogSec=3min
CapabilityBoundingSet=CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_NET_RAW CAP_NET_BIND_SERVICE
PrivateTmp=yes
PrivateDevices=yes
ProtectSystem=full
ProtectHome=yes
ProtectControlGroups=yes
ProtectKernelTunables=yes
MemoryDenyWriteExecute=yes
RestrictRealtime=yes
RestrictAddressFamilies=AF_UNIX AF_NETLINK AF_INET AF_INET6
SystemCallFilter=~@clock @cpu-emulation @debug @keyring @module @mount @obsolete @raw-io

[Install]
WantedBy=multi-user.target

# /lib/systemd/system/systemd-resolved.service.d/resolvconf.conf
# tell resolvconf about resolved's builtin DNS server, so that DNS servers
# picked up via networkd are respected when using resolvconf, and that software
# like Chrome that does not do NSS (libnss-resolve) still gets proper DNS
# resolution; do not remove the entry after stop though, as that leads to
# timeouts on shutdown via the resolvconf hooks (see LP: #1648068)
[Service]
ExecStartPost=+/bin/sh -c '[ ! -e /run/resolvconf/enable-updates ] || echo "nameserver 127.0.0.53" | /sbin/resolvconf -a systemd-resolved'
ReadWritePaths=-/run/resolvconf
Actually, it's right at the end that we find the problem: the
nameserver is pointing towards our localhost. It seems to look
for a file, and if the file /run/resolvconf/enable-updates exists,
the resolver provides the address of the localhost. Well,
/sbin/resolvconf doesn't seem to exist? So that seems a bit
odd. Well, let's fix it so that we can get updates.
mkdir -p /etc/systemd/resolved.conf.d
cat <<EOF > /etc/systemd/resolved.conf.d/dns_server.conf
[Resolve]
DNS=192.168.0.1
EOF
systemctl daemon-reload
systemctl restart systemd-resolved
That should do the trick. Ok, that's it for system configuration for now.
Running with python
ev3dev python interface..
As I already mentioned, I'm not a big fan of python. It's great for doing some prototyping and seeing that everything works, but for anything more elaborate it quickly becomes a mess; it isn't a type-safe programming language, which means the compiler won't validate the types at compile time. But it's nice to be able to do a fast check that I can control the vehicle and receive data from the sensors.
Using tramp
I haven't installed the ev3dev python library on my machine; the reason
is that I just wanted to test that Gorby reacts to my commands,
and I don't actually care about making anything substantial in python.
But the documentation for the ev3dev python library is quite good.
Since I'm a big fan of emacs and org-mode, I can make small scripts
and execute them straight away using tramp. Or I could just edit a file on gorby
using emacs and tramp.
Here is an example:
from time import sleep
from ev3dev2.motor import LargeMotor, OUTPUT_B, OUTPUT_C, SpeedPercent, MoveTank
from ev3dev2.sensor import INPUT_1
from ev3dev2.sensor.lego import TouchSensor
from ev3dev2.led import Leds
from ev3dev2.sound import Sound

sound = Sound()
sound.speak('Hello I am a robot')

tank_drive = MoveTank(OUTPUT_B, OUTPUT_C)
tank_drive.on_for_rotations(SpeedPercent(75), SpeedPercent(75), 10)
here is the actual org-source code when using tramp:
#+NAME: hello-python
#+HEADER: :results none :eval never-export
#+begin_src python :dir "/ssh:robot@10.42.0.250:/home/robot"
  from time import sleep
  from ev3dev2.motor import LargeMotor, OUTPUT_B, OUTPUT_C, SpeedPercent, MoveTank
  from ev3dev2.sensor import INPUT_1
  from ev3dev2.sensor.lego import TouchSensor
  from ev3dev2.led import Leds
  from ev3dev2.sound import Sound

  sound = Sound()
  sound.speak('Hello I am a robot')

  tank_drive = MoveTank(OUTPUT_B, OUTPUT_C)
  tank_drive.on_for_rotations(SpeedPercent(75), SpeedPercent(75), 10)
#+end_src
This makes it much easier to work with. I'm not going to turn this
into an emacs tutorial; this is just an alternative way to test
things on target, instead of using ssh and editing there, or
editing on the host and using scp.
In this regard python is great: minimal effort to make really
small test utilities. On the other hand, the EV3 is quite slow
both in terms of cpu speed and memory, which means it takes
several seconds before a python script actually executes. And the
bigger the application becomes, the more time and memory it will
use.
Using c++
C++ is a different beast altogether; to be able to use c++
we need to cross-compile. Cross compilation means that the compiler
produces binary code which is not executable on the host
machine. For example, on an x86_64 host machine the compiler
needs to produce a binary for e.g. arm, which is native on the
target. The binary produced on the host then needs to be transferred
to the target, where the instruction set matches. Alternatively one
could install development and build tools on the target and compile
there. But that is really not such a great idea. There are (at least)
two major reasons why this is a bad idea.
- It will take up a lot of disk space to install a complete development environment.
- The EV3 is really slow in comparison to my pc (see the check hardware section). It would take ages to compile.
With those reasons, it makes sense to set up a cross-compiler toolchain on the local machine and use that to produce the ELF (arm binary) output.
Setting up a toolchain for cross compilation can be quite cumbersome. Fortunately there is a docker image with (almost) all the necessary tools installed.
Setting up Docker
First it's necessary to pull the docker image to be able to run it.
docker pull ev3dev/debian-stretch-cross
This command will essentially download the ev3dev docker image to
the local machine. We can check that the image is in fact on our host:
docker images | awk '/ev3dev/ {printf("%s\t%s\t%s\n",$1,$2,$3)}'
Repository | Tag | Image Id |
---|---|---|
ev3dev/debian-stretch-cross | latest | 8e3b0aa8f243 |
This is a fresh image; what we want is to create a writable container based on it.
The docker create command creates a writeable container layer over the specified image and prepares it for running the specified command
An image is basically a blueprint of what a container should look like; it contains all the necessary binaries, source code, tools and more. It is immutable (unchangeable). A container uses an image as a blueprint to create a specific instance; a container can be mutable.
A Docker container is a virtualized run-time environment where users can isolate applications from the underlying system
The above quote is exactly the kind of thing that we want when building and using the cross-compiler toolchain. We don't want to mess up the host system with arm binaries. Another benefit is that a docker image already exists, which means we can create a container that basically works out of the box.
To create a container, the docker create command is used.
Here is an example:
docker create -ti -u $(id -u):$(id -g) --name GorbyBuild -v $HOME:/src -w /src ev3dev/debian-stretch-cross /bin/bash
906f0913cefe3575a4a0f1bab49dda7876d4779f993cfeafeb9edfdb7ad74973
We can now check that the container exists.
docker container ls --all | awk '/(CONTAINER|GorbyBuild)/ {print $0}'
CONTAINER ID   IMAGE                         COMMAND       CREATED         STATUS    PORTS   NAMES
2751b793d5fb   ev3dev/debian-stretch-cross   "/bin/bash"   4 seconds ago   Created           GorbyBuild
Now it's time to start it up. Since we gave it a name (GorbyBuild), we don't need to remember the container id.
docker start GorbyBuild
GorbyBuild
And finally we can start working with the GorbyBuild container. I
prefer to work with the container by executing /bin/bash at the
beginning, to have a more controlled environment. But it's possible to
just run a single command inside the docker; docker exec will
execute a command inside the container. Here is an example:
docker exec -t GorbyBuild cat /etc/hostname
2751b793d5fb
Maybe not the most creative name for a host… If one wants several commands executed, like a bash script, it's necessary to use the shell command, though this requires that the container has some kind of shell installed.
docker exec -it GorbyBuild /bin/bash -c "
cd /etc/
cat os-release | grep PRETTY_NAME | sed 's/.*=//'
"
"Debian GNU/Linux 9 (stretch)"
And since we do have a bash installed we might just use that shell to get inside the container.
> docker exec -it GorbyBuild /bin/bash
compiler@2751b793d5fb:/src$
Since it's a debian distribution we can use apt to upgrade the system.
docker exec -it GorbyBuild bash -c "
sudo apt update
sudo apt upgrade
"
Now let's see if we can compile something using the provided
arm-linux-gnueabi-gcc
from the container.
docker exec -t GorbyBuild /bin/bash -c "
cat <<'EOF' > /tmp/main.c
#include <stdio.h>
int main() {
    printf(\"Hello Gorby\");
    return 0;
}
EOF
arm-linux-gnueabi-gcc -Wall /tmp/main.c -o /src/hello_gorby
"
file ~/hello_gorby
/home/calle/hello_gorby: ELF 32-bit LSB pie executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=4d5671d991215dde587cffce20ee08ae2a22b9f5, not stripped
This example creates a small c program in the docker's /tmp
directory and compiles it inside the docker; then outside it checks
the file. And since my home directory is mounted inside the
docker, the file is available from my host machine.
This concludes the docker setup.
Installing conan
Conan is a c++ package manager based on python. Unfortunately python
is not installed on the ev3dev image and is therefore not part of
our newly created container (GorbyBuild). That needs to be fixed. It's also required to
have pip3 installed for python package management.
docker exec -t GorbyBuild /bin/bash -c "
sudo apt install python3
sudo apt install python3-pip
"
Now we need to install conan
using pip3
.
docker exec -t GorbyBuild /bin/bash -c "
python3 -m pip install conan
"
There is one more thing to do before the setup is complete. The container is all set up, but I want to use my own editor and configuration. This could be arranged in the docker container, but it's kind of impractical. I also want to run unit tests, which is not possible in the docker or even on my host (actually it is possible using qemu, but that's outside this article). That means I need to be able to build both for x86_64 and arm, and since my host (without using docker) already has a build environment set up (clang, gcc, cmake and so on), there is no reason not to use it. But that requires that I also install conan on my host.
python3 -m pip install conan
This will install conan in the user's local directory. So to be able to use conan we need to add it to the path somehow; we do this by adding the path in ~/.bashrc:
docker exec -t GorbyBuild /bin/bash -c '
echo "PATH=\$HOME/.local/bin:\$PATH" >> $HOME/.bashrc
echo "PROFILE_DIR=/src/.conan/profiles" >> $HOME/.bashrc
'
Conan software package manager
We start with a small overview of how conan works; I already described how to install conan.
Conan is a software package manager which is intended for C and C++ developers.
Conan is universal and portable. It works in all operating systems including Windows, Linux, OSX, FreeBSD, Solaris, and others, and it can target any platform, including desktop, server, and cross-building for embedded and bare metal devices. It integrates with other tools like Docker, MinGW, WSL, and with all build systems such as CMake, MSBuild, Makefiles, Meson, SCons. It can even integrate with any proprietary build systems.
When a package is installed, conan keeps the source, build and deploy structure in a repository and generates build files with different generators to include in the project. The generators can be, for example, cmake, make, visual studio, scons, qmake and many more; there is even a markdown generator that generates useful information on each requirement. It is also possible to create your own package and install it into the local repository. If a generator doesn't exist, it's easy to create your own. This will just be a short overview; I leave it to the reader to find more information in the conan documentation.
A conan recipe is an instruction for how to build a package and what dependencies a package has, and is defined by a "conanfile.py" file. A conan package, on the other hand, is source and pre-compiled binaries for any possible platform and/or configuration. These binaries can be created and uploaded to a server with the same commands on all platforms. Installation of a package from a server is also very efficient: only the necessary binaries for the current platform and configuration are downloaded. It's possible to set up your own remote server, but I leave that for some other time and place.
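To make "recipe" concrete, here is a minimal sketch of a conanfile.py in conan 1.x style. The package name, version and build steps are made up for illustration; only the fmt dependency comes from the examples later in this article.

```python
from conans import ConanFile, CMake  # conan 1.x style recipe

class GorbyAppConan(ConanFile):
    # "gorbyapp"/"0.1" are hypothetical names, just for illustration.
    name = "gorbyapp"
    version = "0.1"
    settings = "os", "compiler", "build_type", "arch"
    requires = "fmt/6.2.1"   # dependency resolved from a remote
    generators = "cmake"     # emit conanbuildinfo.cmake for the project

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()
```

The settings (os, compiler, arch, build_type) are what distinguish the different pre-compiled binaries of one and the same recipe.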
Conan remotes
Remotes are places where conan will search for packages when the argument
--remote=... is applied; that is,
remote sites which contain many (or few) conan packages/recipes. One such
remote site is conan center, where all kinds of different packages
can be found. Another is the Bincrafters remote repository; both of
these remote sites have a web interface to search for specific
packages. More commonly, though, one will use the CLI to search
for packages.
To add a new remote:
conan remote add Example https://example.com
This will add a remote repository. All repositories are added to the ~/.conan/remotes.json
file.
To list the remote added repositories:
conan remote list
conan-center: https://conan.bintray.com [Verify SSL: True]
bincrafters: https://api.bintray.com/conan/bincrafters/public-conan [Verify SSL: True]
Example: https://example.com [Verify SSL: True]
It's also possible to edit the json file directly:
cat ~/.conan/remotes.json
{
  "remotes": [
    {
      "verify_ssl": true,
      "url": "https://conan.bintray.com",
      "name": "conan-center"
    },
    {
      "verify_ssl": true,
      "url": "https://api.bintray.com/conan/bincrafters/public-conan",
      "name": "bincrafters"
    },
    {
      "verify_ssl": true,
      "url": "https://example.com",
      "name": "Example"
    }
  ]
}
To remove a remote one can either use the CLI or just edit ~/.conan/remotes.json
by hand:
conan remote remove Example
These are the most common arguments; there are many others that can be found here.
Conan Search
The search command is used to search for packages on a server or
among locally installed packages. It searches for both recipes and
binaries. Unless a --remote is specified, the local cache is searched.
Let's say I want to use fmt in my c++ project. Since I don't have it installed
in my local cache I need to search for it remotely. The search pattern
can include "*" to match multiple versions, for example:
conan search "fmt/6*" --remote=all
Existing package recipes:

Remote 'bincrafters':
fmt/6.0.0@bincrafters/stable
Remote 'conan-center':
fmt/6.0.0
fmt/6.0.0@bincrafters/stable
fmt/6.1.0
fmt/6.1.1
fmt/6.1.2
fmt/6.2.0
fmt/6.2.1
In this example we used all our remotes (see conan remotes) and
searched for the fmt package. We could also use just one by giving its
name in the remote field: --remote=conan-center.
I can also inspect a specific package to gain more information.
conan inspect "fmt/6.1.0" --remote=conan-center
Downloading conanmanifest.txt
Downloading conanfile.py
Downloading conan_export.tgz
name: fmt
version: 6.1.0
url: https://github.com/conan-io/conan-center-index
homepage: https://github.com/fmtlib/fmt
license: MIT
author: None
description: A safe and fast alternative to printf and IOStreams.
topics: ('conan', 'fmt', 'format', 'iostream', 'printf')
generators: cmake
exports: None
exports_sources: ['CMakeLists.txt', 'patches/**']
short_paths: False
apply_env: True
build_policy: None
revision_mode: hash
settings: ('os', 'compiler', 'build_type', 'arch')
options:
    fPIC: [True, False]
    header_only: [True, False]
    shared: [True, False]
    with_fmt_alias: [True, False]
default_options:
    fPIC: True
    header_only: False
    shared: False
    with_fmt_alias: False
Here we can see that there are options for using it header-only or building it as a shared library, and which default options are used.
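As a sketch of how a consuming project could set those options, here is a conanfile.txt in conan 1.x syntax (the cmake generator is just an example choice):

```
[requires]
fmt/6.2.1

[options]
fmt:header_only=True

[generators]
cmake
```

With header_only=True no pre-compiled fmt binary is needed, which sidesteps the arch/compiler matching question entirely for that one dependency.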
You can also check what binaries exist in the repository, that is, whether a binary is compiled for different platforms or with different settings:
conan search fmt/6.2.1@ --remote=all
Existing packages for recipe fmt/6.2.1:

Existing recipe in remote 'conan-center':

Package_ID: 03098e9ca3b3217acdb827caba0356e229ff04a1
    [options]
        header_only: False
        shared: True
    [settings]
        arch: x86_64
        build_type: Release
        compiler: clang
        compiler.libcxx: libc++
        compiler.version: 3.9
        os: Linux
    Outdated from recipe: True
Package_ID: 0361dffd8f43a76c9aa894d20cb58fd062ae22cc
    [