

1. Introduction to LSP

This document describes how to build and use an LSP server. But let's not get ahead of ourselves; let's start with what an LSP server is and what it is good for.

1.1. LSP - Language Server Protocol

Let's start by paraphrasing the official documentation.

The Language Server protocol is used between a tool (the client) and a language smartness provider (the server) to integrate features like auto complete, go to definition, find all references and alike into the tool.

In other words, the parsing of the code/project you are working on is separated out into a server. For example, Eclipse has an indexer which marks problems in the code you are writing and provides auto-completion, reference jumping and more. Unfortunately that indexing is quite error-prone, but who can blame them: writing a parser for C++ is hard and cumbersome, and it takes a lot of resources. Yet in the end the code needs to be compiled, so someone has to be able to parse it, namely the compiler. LSP is a JSON-based protocol (where the content part is JSON-RPC) between the client (the editor) and the server. The Language Server Protocol defines a set of JSON-RPC request, response and notification messages which are exchanged between the client and the server.
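
As an illustration, a request asking the server where a symbol is defined could look roughly like this (a sketch only; the field layout follows the LSP specification, but the file path and position are made up):

{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",
    "params": {
        "textDocument": { "uri": "file:///home/user/project/src/main.cpp" },
        "position": { "line": 11, "character": 7 }
    }
}

The server answers with a response message carrying the location(s) of the definition, which the editor then jumps to.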

1.2. What are the benefits?

The nice thing about an LSP server is that the protocol is not restricted to one programming language: the client simply starts the right LSP server for the mode (language) you are currently using. And even if only C++ is used, it is easy to exchange the LSP server depending on which one works best. This is because LSP is an open protocol, which means anyone can take the specification and implement it, and as long as the client supports LSP you are good to go. For example, you can try out different LSP servers for the same project, just to see which one suits your needs best. Another benefit is that instead of the community writing its own indexer and plugins for every combination of client and language, the parsing code can be kept in one server which does not interfere with the client.

1.3. What other languages are there?

The best way to find out which LSP servers exist, which languages they support and which features they provide is to go to https://langserver.org/. There are lists of both servers and clients.

1.3.1. Servers

Language   Maintainer      Repo                     Code completion   Hover   Jump to definition   Workspace symbols   Find references   Diagnostics
c/c++      LLVM Team       Clangd                   x                 x       x                    x                   x                 x
c/c++      MaskRay         ccls                     x                 x       x                    x                   x                 x
c/c++      Jacob DuFault   Cquery                   x                 x       x                    x                   x                 x
Python     Palantir        Python language server   x                 x       x                    x                   x                 x

These are just a small sample of the servers that can be found at https://langserver.org/.

1.3.2. Clients

Client    Maintainer        Repo
Emacs     Vibhav Pant       lsp-mode
Eclipse   Eclipse           lsp4e
VSCode    Microsoft         VS Code
Sublime   Tom van Ommeren   Sublime

Again, these are just a small sample of the available clients; for a more complete list see https://langserver.org/.

2. C/C++ language server

As described in the server section there are several different LSP servers for C/C++. After some research and quite a bit of trial and error, the conclusion was that clangd seemed to be missing some parts (I tried and evaluated it; maybe I did something wrong, but it was lacking some crucial features) and cquery hasn't had any updates for a long time. So it all boiled down to one single LSP server: ccls.

ccls, which originates from cquery, is a C/C++/Objective-C language server.

Compared with cquery, it makes use of C++17 features, has fewer third-party dependencies and a slimmed-down code base.

So ccls is the server this document will focus on.

2.1. Installing Dependencies

Since ccls depends on Clang, which is part of LLVM, we need to install it if it isn't already present. Depending on which Linux distribution you are using there are different ways of getting Clang and friends installed. Pick your distribution in the sub-headings below.

2.1.1. Ubuntu installing dependencies

Ubuntu uses apt as its package manager. Fortunately the LLVM project provides a package repository (a PPA of sorts) for getting the latest Clang, which can be found at https://apt.llvm.org/.

So for example, if you want to add the clang-8 repository to your sources (for this exercise I'm assuming you are running xenial), the following commands append the needed entries to /etc/apt/sources.list:

lsb_release -c | awk '{ print "deb http://apt.llvm.org/"$2"/ llvm-toolchain-"$2"-8 main" }' | sudo tee -a /etc/apt/sources.list
lsb_release -c | awk '{ print "deb-src http://apt.llvm.org/"$2"/ llvm-toolchain-"$2"-8 main" }' | sudo tee -a /etc/apt/sources.list

This will add the repository URLs to the sources list. Of course, if you want to run some other version, just exchange the 8 for whatever version you want to use.

We need to add the repository's signing key to make sure we are using the right repository and mark it as trusted. This is done with the following command:

wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -

Make sure the fingerprint matches the one published on https://apt.llvm.org/.
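
To compare, the fingerprints of the keys apt now trusts can be listed with the command below (apt-key is deprecated on newer Ubuntu releases, but it works on xenial):

apt-key fingerprint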

Finally, install Clang and friends:

sudo apt update

sudo apt-get install clang-8 \
    clang-tools-8 \
    clang-8-doc \
    libclang-common-8-dev \
    libclang-8-dev \
    libclang1-8 \
    clang-format-8 \
    python-clang-8
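
A quick way to verify that the packages were installed, and which version you got, is to ask the compiler itself:

clang-8 --version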

2.2. ccls build

First, ccls needs to be cloned locally using git.

git clone --depth=1 --recursive https://github.com/MaskRay/ccls
cd ccls

There is a dependency on RapidJSON, but that is taken care of by the recursive git clone.

2.2.1. Building ccls

Assuming you have CMake installed, let's start by creating an out-of-source build directory "build" and changing into it.

mkdir build
cd build

Now we configure the build with CMake:

cmake -DCMAKE_BUILD_TYPE=Release ..
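
If CMake complains that it cannot find Clang/LLVM, it may help to point it at the packages installed from the apt repository above (a sketch, assuming the clang-8 packages live under /usr/lib/llvm-8):

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=/usr/lib/llvm-8 ..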

In my build I get the following error:

fatal: No names found, cannot describe anything.

This seems to have something to do with git (most likely the shallow clone has no tags for git describe to use) and is safe to ignore; in other words, proceed with make:

make -j8

If you want to install the server system-wide, just run sudo make install.

3. Configuration

The ccls project setup depends on compile_commands.json. There are various ways of creating a compile_commands.json; the most obvious is to use CMake, but if that is not possible one can use other tools, e.g. Bear.

The compile_commands.json file has all the information on how each file is built: compiler, flags, defines and includes. This file needs to be linked into the root of your project directory.

Here is an example, using the ccls build directory that we created before, of linking compile_commands.json into the project root.

ccls/build> cd ..
ccls> ln -s build/compile_commands.json .
ccls> ls -la
.
.
lrwxrwxrwx  1 ocp ocp    27 Aug 21 16:10 compile_commands.json -> build/compile_commands.json
.
.

3.1. Creating compile_commands.json

3.1.1. cmake

CMake has a variable that can be set on the command line, -DCMAKE_EXPORT_COMPILE_COMMANDS=1. Here is an example running cmake:

cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=1 ..
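
After running cmake you can quickly verify that the database was generated, and see what a single entry looks like, with jq (which is also used later in this document); run this from the build directory:

jq '.[0]' compile_commands.json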

It can also be set statically inside the CMakeLists.txt file:

set(CMAKE_EXPORT_COMPILE_COMMANDS 1)

More details can be found in the CMake documentation for CMAKE_EXPORT_COMPILE_COMMANDS.

3.1.2. bear

Bear is a tool that generates a compilation database. Some projects might not have the benefit of running CMake; for these projects the bear command intercepts the exec calls made by the build tool.

Here is an example using bear

bear make -j8
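
Note that newer Bear releases (3.x) expect the build command after a separator; if the invocation above complains, something like the following is needed instead (an assumption based on Bear's changed command-line syntax):

bear -- make -j8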

3.2. ccls command line

ccls runs as a daemon, waiting for the client to send requests. There are some command-line options that can be used to customize the behavior of ccls; for a complete list run ccls -help.

Command line option      Help
-h                       Alias for -help
-index=<root>            Standalone mode: index a project and exit
-init=<string>           Extra initialization options in JSON
-log-file-append         Append to log file
-test-index=<string>     Run index tests

When ccls is run it will create a cache in a directory (default .ccls-cache). This cache can be fairly big, but that is what makes ccls fast at looking things up. Fortunately ccls is also able to run in a stand-alone indexing mode. Here is an example of how that is done, using ccls's own source:

ccls>time ./build/ccls -index=./
.
.
.
./build/ccls -index=./  119.30s user 39.03s system 408% cpu 38.750 total
ccls>du -sh .ccls-cache
42M     .ccls-cache/

ccls>du --exclude=build --exclude=.git -sh  ./
12M     ./

To summarize the output:

              Size   Indexing time   Cache size
ccls source   12M    ~2 min          42M

Here we see a ratio of \(42 / 12 = 3.5\), but my experience is that this changes when more third-party code is involved, and it even changes during run time, so by no means is this a ratio to be trusted.

As noted from the command-line options above, there is an -init option that can be passed to the executable.

3.2.1. Init string

The init string can be used to initialize ccls with different options; for a complete list see the ccls wiki. The init string needs to be a JSON string. Here we have the opportunity to add/exclude compiler flags, set another cache directory, set the number of indexing threads and many other things. This is especially useful in a cross-compiling environment, but let's not get ahead of ourselves. Let's first use the information above to create a new cache in e.g. /tmp, and store it in JSON format instead.

Let's first create the JSON file which we will be using and call it init.json. Usually ccls is not run from the command line and the arguments are provided through the client, but for this exercise we will run ccls stand-alone to make sure we understand what happens.

{
    "cache": {
        "directory": "/tmp/ccls-cache",
        "format": "json"
    },
    "index": {"blacklist":
              [
                  "^/usr/(local/)?include/c\\+\\+/[0-9\\.]+/(bits|tr1|tr2|profile|ext|debug)/",
                  "^/usr/(local/)?include/c\\+\\+/*",
                  "^/usr/include/LLVM*",
                  "^/usr/lib/LLVM-[0-9]+/*"
              ]
             },
    "compilationDatabaseDirectory": "build"
}

The above init will set:

Option      Consequence                                             Value
directory   The cache will be written to this directory             /tmp/ccls-cache
format      The format the cache is stored in                       json
blacklist   Matching paths (regular expressions) are not indexed    /usr/include…

Now let's run the indexer with the above init string:

./build/ccls -index=./ --init="`cat init.json`"

We will now end up with a cache in /tmp/ccls-cache containing additional .json files that can be inspected, e.g. using jq:

jq . < /tmp/ccls-cache/@home@ocp@tmp@ccls/src@main.cc.json

All the init-string options and their meanings can be found in the ccls wiki.

3.2.2. .ccls file

.ccls is a line-based text file at the project root. Its main function is to specify compiler flags needed to properly index your code. This file is especially important when cross-compiling, which will be explained in the cross-compilation section.

In this file we can, for example, specify that all files ending in .h should be treated as C++ headers, e.g.:

%h -x
%h c++-header

The %compile_commands.json directive

By default .ccls compiler flags are applied only to files not listed in compile_commands.json. If this directive appears first in .ccls then after compile_commands.json is parsed, the rest of the .ccls arguments will be appended to the compiler flags for files found in compile_commands.json.

It is also possible to add, for example, target directives for cross-compiling. More on that later.

4. Running ccls

Most of the time ccls is run as a daemon and should be started by the client program, e.g. Emacs, Vim, VS Code or whatever editor you use.

5. Cross-compilation with ccls

So far we have used ccls targeting the same architecture as the host machine we are running on. But if we want to cross-compile, the issue becomes somewhat more complicated. I will try to show how I got it to work using a Yocto SDK. To set up the build environment for Yocto, we need to source an environment script provided by the SDK; this script sets various environment variables, for example which compiler to use. Most of the time this points to some gcc cross-compiler:

>echo $CXX
arm-poky-linux-gnueabi-g++  -march=armv7ve -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a7 --sysroot=/opt/ocp/11.0.4/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi
>echo $CXXFLAGS
-O2 -pipe -g -feliminate-unused-debug-types

The first issue is to identify flags which are not compatible between clang and gcc.

For that we first need to build the project and create a compile_commands.json. There are of course other ways of checking which flags are used, but I found this to be quite useful.

>less compile_commands.json
{
  "directory": "/home/prj/build/src/adam",
  "command": "/opt/sdk/sysroots/x86_64-ocpsdk-linux/usr/bin/arm-poky-linux-gnueabi/arm-poky-linux-gnueabi-g++   -march=armv7ve -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a7 --s
ysroot=/opt/sdk/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi --sysroot=/opt/sdk/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi  -DBOOST_ENABLE_ASSERT_HANDLER -I/home/ocp/src/c
pp1/eden/include -I/home/prj/src/adam/include -I/home/prj/src/serpent/include -I/home/ocp/user_sysroots/11.0.4/usr/include -I/home/prj/gen_files -I
/home/prj/src/adam/../include   -O2 -pipe -g -feliminate-unused-debug-types  -Wno-psabi -Wno-maybe-uninitialized -DBOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION   -Wall -Wext
ra -pedantic -funsigned-char -std=gnu++17 -o CMakeFiles/adam_app.dir/src/main.cpp.o -c /home/prj/src/adam/src/main.cpp",
  "file": "/home/prj/src/adam/src/main.cpp"

We can note that there is a problem between gcc and clang when it comes to specifying the target. Gcc uses -march, while clang uses something called a target triple. Here is the general format:

-target <arch><sub>-<vendor>-<sys>-<abi>

Where

arch     x86_64, i386, arm, thumb, mips, etc.
sub      for example on ARM: v5, v6m, v7a, v7m, etc.
vendor   pc, apple, nvidia, ibm, etc.
sys      none, linux, win32, darwin, cuda, etc.
abi      eabi, gnu, android, macho, elf, etc.

So in the case above we translate the gcc flags into:

-target armv7e-unknown-linux-elf

When a parameter is not important it can be omitted, or you can use unknown and the defaults will apply; in this example the vendor is unimportant and is therefore set to unknown. The rest of the flags on the line above are compatible.

5.1. Finding incompatible flags

There might be other flags that are not compatible. One way to figure out which flags are incompatible between g++ and clang++ is to grab a compile line from compile_commands.json and substitute the compiler command with clang++; so in the case above we remove arm-poky-linux-gnueabi-g++ and exchange it for clang++, as the example below shows.

Here is an example

clang++   -march=armv7ve -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a7 --sysroot=/opt/sdk/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi  -DBOOST_ENABLE_ASSERT_HANDLER -I/home/ocp/src/cpp1/eden/include -I/home/ocp/src/cpp1/eden/src/adam/include -I/home/ocp/src/cpp1/eden/src/serpent/include -I/home/ocp/user_sysroots/11.0.4/usr/include -I/home/ocp/src/cpp1/eden/gen_files -I/home/ocp/src/cpp1/eden/src/adam/../include   -O2 -pipe -g -feliminate-unused-debug-types  -Wno-psabi -Wno-maybe-uninitialized -DBOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION   -Wall -Wextra -pedantic -funsigned-char -std=gnu++17 -o CMakeFiles/adam_app.dir/src/main.cpp.o -c /home/ocp/src/cpp1/eden/src/adam/src/main.cpp
clang: warning: argument unused during compilation: '-mthumb' [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-mfpu=neon' [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-mfloat-abi=hard' [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-mcpu=cortex-a7' [-Wunused-command-line-argument]
warning: unknown warning option '-Wno-psabi' [-Wunknown-warning-option] (unknown)
warning: unknown warning option '-Wno-maybe-uninitialized'; did you mean '-Wno-uninitialized'? [-Wunknown-warning-option]
error: unknown target CPU 'armv7ve'
.
.
.

As discussed before, the target produces an error; that we knew already. The arguments reported as unused would have been used if the target had been right, but the warnings about unknown warning options should raise a flag that these are not compatible.

So we identified two new problems.

-Wno-psabi
    should be excluded, since there is no clang++ equivalent
-Wno-maybe-uninitialized
    has a different name in clang++, so it should also be excluded and later added back as -Wno-uninitialized instead

We have now identified the incompatible flags; it's time to exclude them.

5.2. Exclude flags

As mentioned before, the init JSON string that is provided to ccls has an exclude field. As we have located three flags which need to be excluded, let's start writing our init JSON string.

{
    "clang": {
        "excludeArgs": ["-march=armv7ve","-Wno-psabi", "-Wno-maybe-uninitialized"]
    }
}

These flags should be added to the client's initialization of ccls, for example in Emacs or VS Code, so that they are always removed. How the initialization string is passed by each client is described in that client's documentation and is outside the scope of this document.

5.3. Project setup (.ccls)

.ccls is a line-based text file at the project root. Its main function is to specify compiler flags needed to properly index your code: -I, -D, etc. Each line consists of one argument to be added to the compiler command line. No whitespace splitting is performed on the argument, thus -I foo cannot be used (use -Ifoo, or put -I and foo on two separate lines).

This information can be found in the ccls wiki.

5.3.1. compile commands

The first thing we want is to include compile_commands.json; this is done with the following directive:

%compile_commands.json

This means that for each of the compile commands listed, the flags that we add in this file (.ccls) will be appended after the command found in the JSON file. So for example if we add another include path, -I/tmp/myIncludes, it will be passed to clang++ together with whatever is already on the compile_commands.json line.

So we need this directive.
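
For illustration, a minimal .ccls using this directive together with the hypothetical extra include path from above would look like this:

%compile_commands.json
-I/tmp/myIncludes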

5.3.2. Getting system includes

The next part is getting the same system include paths as gcc, so that we use the same libstdc++ and friends. To obtain those paths we need to check which system include directories g++ uses. This can be done with the following command:

eval $CXX -v -std=c++17 -xc++ -fsyntax-only /dev/null
.
.
.

This will spit out a whole bunch of things. We are only interested in what is between "#include <...> search starts here:" and "End of search list.".

#include "..." search starts here:
#include <...> search starts here:

.
/opt/sdk/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi/usr/include/
End of search list.

Now, for each line between these start and end markers, copy it into the .ccls file in the following style:

-isystem
...
-isystem
...
-isystem
/opt/sdk/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi/usr/include/

The -isystem directive tells clang++ to search that path for system includes.
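
If you prefer not to copy the paths by hand, a small shell sketch like the one below (assuming GNU sed and the $CXX variable set by the SDK environment script) prints them in the .ccls style:

eval $CXX -v -std=c++17 -xc++ -fsyntax-only /dev/null 2>&1 \
    | sed -n '/#include <...> search starts here:/,/End of search list./p' \
    | sed -e '1d' -e '$d' -e 's/^ */-isystem\n/'

If the output looks right it can be appended to the .ccls file with >> .ccls.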

5.3.3. adding flags

As mentioned before, one of the flags we excluded has a clang++ equivalent (see the section on finding incompatible flags); now it's time to add it back, but in the clang++ spelling:

-Wno-uninitialized

5.3.4. adding target and other

Lastly we will add the (architecture) target which ccls/clang will use to match g++. This was described in the target example above, so all we need to do is add it to the .ccls file:

-target
armv7e-unknown-linux-elf

It is necessary to put -target and its value on two separate lines, exactly as in the example, since .ccls does not split arguments on whitespace.

5.3.5. final .ccls file

Here is an example of a complete .ccls file:

%compile_commands.json
%h -x
%h c++-header
-isystem
/opt/sdk/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi/usr/include/c++/8.2.0
-isystem
/opt/sdk/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi/usr/include/c++/8.2.0/arm-poky-linux-gnueabi
-isystem
/opt/sdk/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi/usr/include/c++/8.2.0/backward
-isystem
/opt/sdk/sysroots/x86_64-ocpsdk-linux/usr/lib/arm-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/8.2.0/include
-isystem
/opt/sdk/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi/usr/lib/gcc/arm-poky-linux-gnueabi/8.2.0/include
-isystem
/opt/sdk/sysroots/x86_64-ocpsdk-linux/usr/lib/arm-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/8.2.0/include-fixed
-isystem
/opt/sdk/sysroots/cortexa7t2hf-neon-poky-linux-gnueabi/usr/include/
-target
armv7e-unknown-linux-elf

Together with the init JSON string:

{
    "clang": {
        "excludeArgs": ["-march=armv7ve","-Wno-psabi", "-Wno-maybe-uninitialized"]
    },
    "index": {
        "initialBlacklist":
        [
            "^/usr/(local/)?include/c\\+\\+/[0-9\\.]+/(bits|tr1|tr2|profile|ext|debug)/",
            "^/usr/(local/)?include/c\\+\\+/*",
            "^/usr/include/LLVM*",
            "^/usr/lib/LLVM-[0-9]+/*"
        ]
    }
}
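
With the .ccls file in the project root and the init string configured in the client (or passed on the command line), the stand-alone indexer can be used to sanity-check the setup before hooking up an editor; a sketch, where the project path is hypothetical and ccls is assumed to be installed on the PATH:

cd /home/prj
ln -s build/compile_commands.json .
ccls -index=. -init="$(cat init.json)"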

That concludes how to set up ccls for a cross-compilation environment.

Author: Carl Olsen

Created: 2022-06-11 Sat 19:33