A Trustworthy, Free (Libre), Linux Capable,
Self-Hosting 64bit RISC-V Computer

Gabriel L. Somlo <somlo at cmu dot edu>, 2019-07-04 to 2020-05-08

NEW: the latest build process is maintained at https://github.com/litex-hub/linux-on-litex-rocket

Or skip directly to the self-hosting demo (Section 5).

1. Preamble

My goal is to build a Free/OpenSource computer from the ground up, so I may completely trust that the entire hardware+software system's behavior is 100% attributable to its fully available HDL (Hardware Description Language) and Software sources. More importantly, I need all the compilers and associated toolchains involved in building the overall system (from HDL and Software sources) to be Free/OpenSource, and to be themselves buildable and runnable on the computer system being described. In other words, I need a self-hosting Free/OpenSource hardware+software stack!

I don't own or otherwise control a silicon foundry, and therefore can't fabricate my own ASICs, so I will build the "hardware" component of this computer on an FPGA, ensuring that any programming of (and bitstream generation for) the FPGA happens with Free/OpenSource tools. I consider the tradeoff to be worthwhile and advantageous from a trustworthiness standpoint.

2. Build Instructions for LiteX+RocketChip 64bit SoC

2.1. Prerequisites and Ingredients

This is where we build a complete, Linux-capable 64-bit computer all the way from HDL and Software sources. I maintain forks of some of the required projects at https://github.com/gsomlo, with patches in various stages of upstream-readiness. I will use links to these forks in the instructions to follow, with the understanding that they will be rebased as upstreaming work progresses. The end goal is to eventually delete the forks altogether once all the changes they contain are merged upstream.

2.2. Setup

2.2.1. Development Environment
On a recent Fedora x86_64 workstation (e.g., F30), install the following packages:
    dnf install \
        dtc fakeroot perl-bignum \
        json-c-devel libevent-devel libmpc-devel mpfr-devel \
        python3-devel python3-migen yosys trellis nextpnr
I expect packages for other Linux distributions (e.g., Debian) to be named similarly. You may have to roll your own in case distro packages are not fresh enough to support the latest features; when in doubt, build recent git snapshots of the hardware toolchain (yosys, trellis, nextpnr) from source.
2.2.2. Building the C Cross-Compiler Toolchain
Pre-built binaries exist for this step, and may be more convenient for quick one-off experimentation. Also, RISC-V support should be fully upstreamed as of GCC 9. However, the riscv-gnu-toolchain repository offers a nice, self-contained source bundle of both the compiler and the runtime libraries, and is still fairly recent, so I'm using it for now:
    git clone --recursive https://github.com/riscv/riscv-gnu-toolchain
    pushd riscv-gnu-toolchain
    ./configure --prefix=$HOME/RISCV --enable-multilib
    make newlib linux
This may take several hours to build, so it might make sense to start it inside a screen session, and have it email you when finished... :)

Be sure to add "$HOME/RISCV/bin" to your $PATH. E.g., in bash:
    export PATH=$PATH:$HOME/RISCV/bin
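If your shell startup file may be sourced more than once, a small guard keeps the entry from being appended repeatedly. This is just a sketch (the `add_to_path` helper name is made up); the plain export above works fine too:

```shell
# Append a directory to $PATH only if it is not already present (idempotent).
# Hypothetical helper; a plain `export PATH=...` as shown above also works.
add_to_path () {
  case ":$PATH:" in
    *":$1:"*) ;;              # already in $PATH, nothing to do
    *) PATH="$PATH:$1" ;;
  esac
}
add_to_path "$HOME/RISCV/bin"
export PATH
```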
2.2.3. Setting up LiteX
To install the LiteX SoC environment, follow these steps:
    mkdir ~/LITEX; cd ~/LITEX

    # LiteX project and its related components (unmodified upstream):
    # NOTE: This is probably done better in LiteX's *own* installer, maybe
    #       I should just learn to trust it and stop rolling my own :)

    github_clone () {
      local ACCNT=$1
      local PREFIX=$2
      local PRJLIST=$3
      for PRJ in $PRJLIST; do
        /usr/bin/git clone --recursive https://github.com/$ACCNT/$PREFIX$PRJ
        (cd $PREFIX$PRJ; /usr/bin/python3 setup.py develop --user)
      done
    }

    github_clone litex-hub pythondata- 'software-compiler_rt'
    github_clone enjoy-digital lite 'x eth dram pcie sata sdcard iclink video scope jesd204b'
    github_clone litex-hub lite 'spi x-boards'
    github_clone litex-hub pythondata-misc- 'tapcfg'
    github_clone litex-hub pythondata-cpu- 'lm32 mor1kx picorv32 serv vexriscv rocket'
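To see which repositories those calls fetch, here is a dry-run twin of github_clone (hypothetical name, same argument convention) that only prints the URL each loop iteration would clone:

```shell
# Dry-run twin of github_clone: print the clone URLs instead of fetching them.
github_clone_dryrun () {
  local ACCNT=$1    # github account name
  local PREFIX=$2   # common repository-name prefix
  local PRJLIST=$3  # space-separated list of per-project name suffixes
  for PRJ in $PRJLIST; do
    echo "https://github.com/$ACCNT/$PREFIX$PRJ"
  done
}

github_clone_dryrun enjoy-digital lite 'x eth dram'
# prints:
#   https://github.com/enjoy-digital/litex
#   https://github.com/enjoy-digital/liteeth
#   https://github.com/enjoy-digital/litedram
```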
LiteX already supports the 64-bit Rocket Chip; this includes a submodule (pythondata-cpu-rocket, cloned above) with pre-generated Verilog files reflecting a few LiteX-specific, configuration-only changes to the Rocket Chip sources.

2.3. Building FPGA Bitstream

This section shows how to build the "hardware" for our Linux-capable 64-bit Rocket+LiteX computer, in the form of FPGA bitstream.
2.3.1. Building for the ecp5versa board
This board is programmed using completely Free/OpenSource tools (yosys/trellis/nextpnr). To build the bitstream, run:
    cd ~/LITEX
    litex/litex/boards/targets/versa_ecp5.py --build \
      --yosys-nowidelut \
      --csr-csv ./csr_ecp5versa.csv \
      --csr-data-width 32 --sys-clk-freq 60e6 \
      --with-ethernet \
      --cpu-type rocket --cpu-variant linuxd
NOTE: Since nextpnr is stochastic, and our 60MHz timing constraint is on the high side of average, I ran the following loop from a detached screen session to eventually get an acceptable solution:
    while true; do
      litex/litex/boards/targets/versa_ecp5.py --build \
        --yosys-nowidelut --nextpnr-timingstrict \
        --csr-csv ./csr_ecp5versa.csv \
        --csr-data-width 32 --sys-clk-freq 60e6 \
        --with-ethernet \
        --cpu-type rocket --cpu-variant linuxd
      if [ "$?" == "0" ]; then
        echo success | mail -s "success" your@email.here
        break
      fi
    done
After several iterations, I obtained a working top.svf bitstream file. After connecting the ecp5versa's serial console, run:
    openocd -f /usr/share/trellis/misc/openocd/ecp5-versa5g.cfg \
            -c "transport select jtag; init; svf top.svf; exit"
to configure the board. Connect an ethernet cable, and you can successfully download the software via TFTP!
2.3.2. Building for the nexys4ddr board
This board requires the Vivado toolchain from Xilinx (I use version 2018.2), which is proprietary and needs licensing to run. Installation and licensing details for Vivado depend on your local circumstances, and are therefore out of scope for this document.

To build the bitstream, run:
    cd ~/LITEX
    litex/litex/boards/targets/nexys4ddr.py --build \
      --csr-csv ./csr_nexys4ddr.csv \
      --csr-data-width 32 --sys-clk-freq 75e6 \
      --with-ethernet \
      --cpu-type rocket --cpu-variant linux
When this command completes (after an hour or two), find the file named "soc_ethernetsoc_nexys4ddr/gateware/top.bit" and use it to program your nexys4ddr board. Personally, I configured my board to load the bitstream from a microSD card, but there are also options to push the bitstream over the USB console cable, or to program it into the on-board SPI flash. The exact details are out of scope for this document; consult the board's documentation for specifics.

2.4. Preparing the Software Bundle

Once the FPGA board is programmed, the LiteX "bios" will run through its boot-up sequence, and eventually end up attempting to download software (bundled together as a file named "boot.bin") via TFTP. This section describes how to assemble that bundle.
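For reference, any standard TFTP server will do for serving "boot.bin"; as one example, a dnsmasq configuration fragment might look roughly like this (the tftp-root path is an assumption, adjust to your setup):

```
# /etc/dnsmasq.d/tftp.conf -- example fragment; place boot.bin in tftp-root
enable-tftp
tftp-root=/var/lib/tftpboot
```

The LiteX "bios" prints its TFTP attempts (and the IP addresses involved) on the serial console, which helps when debugging the server-side configuration.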
2.4.1. Building BusyBox
BusyBox is built using busybox-1.31.0-rv64gc.config as its configuration file:
    cd ~/LITEX
    curl https://busybox.net/downloads/busybox-1.31.0.tar.bz2 \
         | tar xfj -
    pushd busybox-1.31.0
    cp ~/busybox-1.31.0-rv64gc.config .config
    make CROSS_COMPILE=riscv64-unknown-linux-gnu-
2.4.2. Building an Embedded InitRamFS for Linux
Linux can boot from an internally embedded file system, provided to the kernel builder in the form of a CPIO archive file. We use BusyBox to populate the binary folders of this filesystem, and create a few additional device nodes:
    cd ~/LITEX
    mkdir initramfs
    pushd initramfs
    mkdir -p bin sbin lib etc dev home proc sys tmp mnt nfs root \
             usr/bin usr/sbin usr/lib
    cp ../busybox-1.31.0/busybox bin/
    ln -s bin/busybox ./init
    cat > etc/inittab <<- "EOT"
::sysinit:/bin/busybox mount -t proc proc /proc
::sysinit:/bin/busybox mount -t tmpfs tmpfs /tmp
::sysinit:/bin/busybox mount -t sysfs sysfs /sys
::sysinit:/bin/busybox --install -s
EOT
    fakeroot <<- "EOT"
mknod dev/null c 1 3
mknod dev/tty c 5 0
mknod dev/zero c 1 5
mknod dev/console c 5 1
mknod dev/mmcblk0 b 179 0
mknod dev/mmcblk0p1 b 179 1
mknod dev/mmcblk0p2 b 179 2
find . | cpio -H newc -o > ../initramfs.cpio
EOT
    popd
    rm -rf initramfs
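As a quick sanity check: a newc-format cpio archive starts with the six ASCII characters "070701". A tiny helper (hypothetical, purely for illustration) can confirm the archive was written in the format the kernel expects:

```shell
# Return success if the given file starts with the newc cpio magic "070701".
is_newc_cpio () {
  [ "$(head -c 6 "$1" 2>/dev/null)" = "070701" ]
}

# usage: is_newc_cpio initramfs.cpio && echo "newc archive, OK"
```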
NOTE: BusyBox is merely a quick, convenient way to demonstrate the system's functionality. A more elaborate initial root file system, e.g., one that pivots to mounting a complete Linux distro such as Fedora or Debian, could conceivably be loaded instead, for a more feature-complete Linux experience!
2.4.3. Building the Linux Kernel
Now that we have the initial root file system containing userspace utilities, we may proceed to build the kernel:
    cd ~/LITEX
    git clone https://github.com/litex-hub/linux.git
    pushd linux
    git checkout litex-rocket-rebase
    cp ../initramfs.cpio .
    make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- \
         litex_rocket_defconfig litex_rocket_initramfs.config
    make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu-
NOTE: I added a LiteX-specific default kernel configuration file, and a driver for the LiteETH network interface (originally authored by Joel Stanley for the 32-bit "linux-on-litex-vexriscv" project); The kernel is otherwise unchanged from upstream, with no out-of-tree "hacks" whatsoever!
2.4.4. Building BBL
Somewhat improperly named, the Berkeley Boot Loader (BBL) is actually a machine (M) mode "hypervisor" of sorts, providing handlers for traps that the kernel, running in supervisor (S) mode, can not handle itself. In our particular situation, the CPU core lacks an FPU, but we still want the kernel (and userspace above it) to think floating point operations are supported. When an FP opcode comes up for execution, the resulting "invalid opcode" fault is punted all the way into M-mode, where BBL emulates the operation via its software trap handler. Similarly, the kernel's hvc/sbi console driver traps into M-mode, where BBL services character output to (and input from) the serial console.

Here's how to build BBL, with the Linux kernel from the previous step as its "payload":
    cd ~/LITEX
    git clone https://github.com/gsomlo/riscv-pk
    pushd riscv-pk
    git checkout gls-litex
    mkdir build
    cd build
    ../configure --host=riscv64-unknown-linux-gnu \
                 --with-arch=rv64imac \
                 --with-payload=../../linux/vmlinux \
                 --with-dts=../machine/litex_rocket.dts
    make bbl
    riscv64-unknown-linux-gnu-objcopy -O binary bbl ~/LITEX/boot.bin
The resulting "boot.bin" file must be made available from a TFTP server, so that the LiteX "bios" can download and execute it. Booting from a µSD card (in SPI mode) is also currently supported.

3. Demo

This is a screen capture of a terminal booting RV64IMAC LiteX+RocketChip on a nexys4ddr board, built using the instructions above. The video shows the kernel version, the userspace and kernel view of the supported CPU model ("f" and "d" are emulated by BBL), and demonstrates the ability of the network interface to send and receive packets.

4. Building ecp5versa Bitstream Natively (on rv64gc)

Following the Fedora/RISC-V project's instructions, set up the latest available "Rawhide" disk image as either a libvirt or QEMU guest.

Next, follow the instructions shown in Subsections 2.2.1 and 2.2.3 to install the development environment, including the hardware toolchain and LiteX.

Finally, build a Linux-capable riscv64 LiteX/Rocket SoC bitstream for the ecp5versa board, natively on Fedora-riscv64, by repeating the instructions given in Section 2.3.1.

Check out the build process log, the resulting (55 MHz capable) bitstream file, and a screen capture of Linux booting on the ecp5versa board programmed with the natively built bitstream.

5. The Full Self-Hosting Demo

I built bitstream for a Trellis Board, using the following steps:
    cd ~/LITEX
    litex-boards/litex_boards/targets/trellisboard.py --build \
      --yosys-nowidelut \
      --csr-csv ./csr_trellisboard.csv \
      --csr-data-width 32 --sys-clk-freq 60e6 \
      --with-ethernet --with-spi-sdcard \
      --cpu-type rocket --cpu-variant linuxq
I then mounted a root filesystem cloned from my Fedora-rv64 VM using the (slow) SPI-based µSD card, and managed to get the HDL toolchain components (yosys and nextpnr-ecp5) running, which means that, technically, this computer is indeed capable of self-hosting!

Here's a demo video of LiteX+Rocket launching its own HDL toolchain from the Fedora root filesystem on the µSD card; check out the transcript if you don't want to wait for the slow video!

6. Future Work