StarFive VisionFive 2: home cookbook

From the author

Intro

Recently I bought a StarFive VisionFive 2 board for my own experiments; honestly speaking, I have been keen to work with RISC-V for a while.
After some time I decided to share my experience. Here are my bullet points:

  • Small preparation

    • USB-to-Serial connector

    • Write image to microSD/SSD

    • Set boot mode settings

  • Boot

    • Update bootloader

    • Build kernel

    • Native build

    • Pod build

    • Cross-build on amd64: fast and handy

  • Chroot to risc-v system from amd64 and install packages

  • Bonus 1: run qemu with risc-v

  • Bonus 2: build deb packages for risc-v

  • Bonus 3: kernel build script

  • Conclusions

  • Chapter and verses

I will not write «RISC-V is a modern architect…» introductions or anything of the sort.

ATTENTION

This article rests on two postulates:

  • You are an experienced Linux user and are not scared of the command line

  • You are willing to read manuals

Small preparation

Before you start, please complete three mandatory steps:

  • Connect a USB-to-serial adapter

  • Write the official StarFive Debian image to a microSD card (the same instructions also apply to an SSD)

  • Set the boot mode with the hardware switches

USB-to-Serial connector

I will use a «Prolific Technology, Inc. PL2303 Serial Port / Mobile Phone Data Cable». Full information about the GPIO header (for the VisionFive 2) can be found in the VisionFive pin GPIO header documentation.

Pic. 1. Prolific cable

Pic. 2. Connecting the Prolific cable to the VisionFive 2
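
Before launching minicom, it is worth checking which device node the adapter actually received; /dev/ttyUSB0 in the command below is just the typical case and may differ on your machine:

# confirm the adapter was detected and find its device node
lsusb | grep -i prolific
dmesg | grep -i ttyUSB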

To connect, use this command:

sudo minicom -D /dev/ttyUSB0 

You will see welcome message:

Welcome to minicom 2.9
OPTIONS: I18n
Port /dev/ttyUSB0, 10:42:35
Press CTRL-A Z for help on special keys

Write image to microSD/SSD

Detailed instructions are available in the official guide; I am too lazy to repeat them here.

Pay attention to the «Extend Partition on SD Card or eMMC» part.
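
For reference, a minimal sketch of writing the image from a Linux host; the image file name below is a placeholder and the target device must be double-checked (the official guide remains the authoritative source):

# WARNING: /dev/sdX and the image name are placeholders; verify the device with lsblk first
sudo dd if=starfive-debian.img of=/dev/sdX bs=4M status=progress conv=fsync
sync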

Set boot mode settings

As usual, detailed instructions can be found in the official manual.
I will show my settings for microSD/SSD boot.

Pic. 3. VisionFive 2 boot switches

Boot

Ok, the boot process description can be found in the VisionFive 2 Single Board Computer Software Technical Reference Manual.
You should also read «Updating SPL and U-Boot of Flash».

First of all, the factory bootloader is unable to boot from an SSD, so you need to update it. How?
Of course, the complete information is in the official manual; I will show only the essential parts.

Pic.4. Boot process diagram

OpenSBI definition from opensbi readme:

The RISC-V Supervisor Binary Interface (SBI) is the recommended interface between:

  • A platform-specific firmware running in M-mode and a bootloader, a hypervisor or a general-purpose OS executing in S-mode or HS-mode.

  • A hypervisor running in HS-mode and a bootloader or a general-purpose OS executing in VS-mode.

So, boot simplified sequence is:

  • Hardware-specific ROM start. You can’t change anything in ROM

  • SPL («Secondary Program Loader. Sets up SDRAM and loads U-Boot properly. It may also load other firmware components.» — from u-boot docs). Yes, we will update it.

  • OpenSBI. We will update it too.

  • OS (in our case, Debian)

Be careful: work smart and slow. This is the dangerous part.

Update bootloader

Well, you should download 2 files:

  • spl firmware: u-boot-spl.bin.normal.out

  • payload: visionfive2_fw_payload.img

You can build those files yourself by following the official manual, but I am too lazy, so I took them from GitHub.

Let’s place these files on the microSD card; I prefer the root dir (/). You may use TFTP boot instead of a microSD card.

Don’t be scared if the ‘flashcp’ command is unavailable. I connected the board to my router over Ethernet and installed it with ‘apt-get install mtd-utils’.

Since nobody wants to read the official manuals, I will show the crucial parts.
Well, after powering on the SoC and connecting over serial, we can update the SPL and OpenSBI.
First step: check the MTD partitions:

root@starfive:~# cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00080000 00001000 "spl"
mtd1: 00010000 00001000 "uboot-env"
mtd2: 00400000 00001000 "uboot"
mtd3: 00a00000 00001000 "reserved-data"

Check files:

root@starfive:~# ls /u-boot-spl.bin.normal.out /visionfive2_fw_payload.img -l
-rw-r--r-- 1 root root  154688 Jul 28  2024 /u-boot-spl.bin.normal.out
-rw-r--r-- 1 root root 3016917 Jul 28  2024 /visionfive2_fw_payload.img

Ok, ‘mtd0: 00080000 00001000 «spl»’ is the SPL area (u-boot-spl.bin.normal.out), and ‘mtd2: 00400000 00001000 «uboot»’ is the OpenSBI area.
Step 2. Update the SPL:

flashcp -v u-boot-spl.bin.normal.out /dev/mtd0 

Step 3. Update OpenSBI:

flashcp -v visionfive2_fw_payload.img  /dev/mtd2 

Step 4. Write the OS image to the SSD just like you did with the microSD card.
Step 5. Fix the bootloader config.
Step 6. Reboot and pray:

reboot 

Ok, by default the bootloader config wants to boot from the microSD card. Let’s fix it.
Step 7.1
Connect the SSD to your PC (as in the «Write image to microSD/SSD» section above) and look at the partitions:

Disk /dev/sde: 119,24 GiB, 128035676160 bytes, 250069680 sectors
Disk model: RTL9210B-CG
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1FCD8673-3F11-4325-92C9-AE6CA9DD5AE2

Device      Start     End Sectors  Size Type
/dev/sde1    4096    8191    4096    2M HiFive BBL
/dev/sde2    8192   16383    8192    4M HiFive FSBL
/dev/sde3   16384  221183  204800  100M EFI System
/dev/sde4  221184 8189918 7968735  3,8G Linux filesystem

Step 7.2
My device is /dev/sde. Partition 4 is the root filesystem, partition 3 is the boot area. Let’s mount them:

mkdir -p /media/root
mount /dev/sde4 /media/root/
mount /dev/sde3 /media/root/boot/

Step 7.3.
Fix the bootloader config. Replacing the ‘mmcblk0’ substring with ‘nvme0n1’ is enough (note that the config lives on the mounted SSD, not in your host’s /boot):

sed -i 's/mmcblk0/nvme0n1/g' /media/root/boot/extlinux/extlinux.conf
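
A quick check that the substitution took effect (assuming the SSD boot partition is still mounted under /media/root/boot):

grep -n 'mmcblk0\|nvme0n1' /media/root/boot/extlinux/extlinux.conf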

Step 7.4
Unmount the SSD from your computer, connect it to the board and boot. I also recommend removing the microSD card from the board.

Build kernel

Well, let’s clone the kernel sources; the target kernel version is 6.11. Why 6.11? Because it has PCIe support for our board. I prefer to have the full git history, with all versions:

git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
cd linux
git checkout v6.11-rc6

Now we need a defconfig. I took it from the official StarFive GitHub and placed it into linux/arch/riscv/configs:

32-bit.config                   nommu_k210_sdcard_defconfig
64-bit.config                   nommu_virt_defconfig
defconfig                       starfive_visionfive2_defconfig
nommu_k210_defconfig
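
With the defconfig in place, the .config is generated as usual (shown here for a native build on the board; the cross-build variant of the same command appears later):

# from the kernel source tree root
make starfive_visionfive2_defconfig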

There are three ways to build a kernel for another arch, for example for RISC-V:

  • Native build on risc-v CPU

  • POD (docker/podman) build with bin-fmt

  • Cross-build on foreign arch (amd64 for me)

Native build

This is the slowest method. My results:

time make -j 4

real    57m49.006s
user    216m36.988s
sys     12m20.346s

Pod build

I feel strongly that it’s a hybrid method:

  • it runs on amd64,

  • but in a native environment (RISC-V arch).

binfmt is the Linux ability to run foreign-arch programs. From Debian:

Versions 2.1.43 and later of the Linux kernel have contained the binfmt_misc module. This enables a system administrator to register interpreters for various binary formats based on a magic number or their file extension, and cause the appropriate interpreter to be invoked whenever a matching file is executed

Steps:

  • Install binfmt

  • Get risc-v container

  • Run it

  • Build kernel inside this POD

Let’s install binfmt support on my Debian host machine:

apt-get install binfmt-support 
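
Note: in practice the riscv64 handler itself usually comes from qemu-user-static; you can check whether one is registered like this (handler names vary by distro):

update-binfmts --display | grep -i riscv
ls /proc/sys/fs/binfmt_misc/ | grep -i riscv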

Now, get container:

podman manifest inspect docker.io/library/debian:sid | grep architecture
podman pull --arch=riscv64 debian:sid
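
To make sure the pulled image really is riscv64 and that binfmt emulation works end to end, a quick smoke test:

# should print: riscv64
podman run --rm --platform linux/riscv64 debian:sid uname -m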

Now I will run a podman container with the Debian RISC-V image interactively, with two directories mounted (~/dev/general-scripts and the current dir):

# run container
podman run --platform linux/riscv64 -v `pwd`:/volume -v ~/dev/general-scripts:/scripts -it debian:sid
# add my scripts path to $PATH
PATH=/scripts:$PATH
# install build dependencies:
apt-get update
apt-get install wget debhelper vim build-essential linux-source bc kmod cpio flex libncurses5-dev \
            libelf-dev libssl-dev zstd dwarves bison python3 clang lld llvm pahole \
            devscripts lintian debmake initramfs-tools

Now, repeat our build:

make clean

# kernel
time make -j 30 LOCALVERSION=-alexey

real    9m24.598s
user    259m21.092s
sys     7m46.305s

# modules
time make -j 30 modules

real    0m18.537s
user    4m31.868s
sys     0m14.254s

# packaging to deb
time make bindeb-pkg -j 30

real    1m52.087s
user    7m24.727s
sys     0m56.589s

The total time is approximately 12 minutes. Much better than the native build, and that includes building the modules and packaging!

Cross-build on amd64: fast and handy

Ok, we can see that a native build on the weak RISC-V CPU is many times slower than a podman build on a powerful amd64 CPU (in my case a Ryzen 9 7800x). But we know that cross-compilation will be even faster, because there is no binary translation at all.
We need to set a couple of variables for a kernel cross-build:

  • CROSS_COMPILE=riscv64-linux-gnu-

  • ARCH=riscv

For example: «make CROSS_COMPILE=riscv64-linux-gnu- ARCH=riscv starfive_visionfive2_defconfig»
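This assumes the riscv64 cross toolchain is installed on the amd64 host; on Debian/Ubuntu it can be pulled in roughly like this (stock Debian package names):

sudo apt-get install gcc-riscv64-linux-gnu binutils-riscv64-linux-gnu \
    bc bison flex libssl-dev libelf-dev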
Command sequence to build:

pushd $kernelDir
nproc=30
# clean and build
make CROSS_COMPILE=riscv64-linux-gnu- ARCH=riscv starfive_visionfive2_defconfig
make CROSS_COMPILE=riscv64-linux-gnu- ARCH=riscv clean
make CROSS_COMPILE=riscv64-linux-gnu- ARCH=riscv -j $nproc LOCALVERSION=-alexey
make CROSS_COMPILE=riscv64-linux-gnu- ARCH=riscv -j $nproc modules

# remove old artefacts and make new deb packages, clean up unnecessary files
rm -rf $kernelDir/../*.deb
make CROSS_COMPILE=riscv64-linux-gnu- ARCH=riscv bindeb-pkg -j $nproc
rm -rf $kernelDir/../*.buildinfo
rm -rf $kernelDir/../*.changes
popd

See artefacts:

>ls *.deb
linux-headers-6.11.0-rc3+_6.11.0-rc3-00290-g61315d5419f5-22_riscv64.deb
linux-image-6.11.0-rc3+_6.11.0-rc3-00290-g61315d5419f5-22_riscv64.deb
linux-libc-dev_6.11.0-rc3-00290-g61315d5419f5-22_riscv64.deb

Last step: determine kernel version:

kernelVersion=`dpkg -I $kernelDir/../linux-image-*riscv64.deb | \
    grep Package: | awk '{print $NF}' | grep -Eo "[0-9]{1}[-.0-9+a-z]+"`

Chroot to risc-v system from amd64 and install packages

Ok, we have some deb packages. How do we install them: copy them to the board, or… chroot?
I prefer chroot because:

  • it is faster

  • I can use the scripts from my host machine

How to do it:

# mount ssd to host. /dev/sda4 is ssd for board
installDir="/media/alexey/root"
mount /dev/sda4 $installDir
mount /dev/sda3 $installDir/boot
# add qemu support package
apt-get install qemu-user-static
cp $(which /usr/bin/qemu-riscv64-static) $installDir/usr/bin
# prepare chroot
sudo mount -o bind /proc $installDir/proc
sudo mount -o bind /sys $installDir/sys
sudo mount -o bind /run $installDir/run
sudo mount -o bind /dev $installDir/dev
# make chroot. And now I can install kernel
# remove old kernel
sudo chroot $installDir apt remove -y --purge linux-image-6.11\* || true
# install new
sudo chroot $installDir /bin/bash -c "apt-get install /linux-image-6.*.deb"
# copy device tree files to /boot/
sudo mkdir -p $installDir/boot/dtbs/$kernelVersion
sudo cp -r $installDir/usr/lib/linux-image-${kernelVersion}/* $installDir/boot/dtbs/$kernelVersion/
sudo cp $installDir/boot/dtbs/$kernelVersion/starfive/jh7110-starfive-visionfive-2-v1.3b.dtb \
    $installDir/boot/dtbs/$kernelVersion/starfive/jh7110-visionfive-v2.dtb
# it's my new config for extlinux menu. Will be shown later
sudo cp $myDir/extlinux-alexey.conf $installDir/boot/extlinux/extlinux.conf
sudo sed -i "s|KERNELVERSION|$kernelVersion|g" $installDir/boot/extlinux/extlinux.conf
# umount
umount $installDir -R
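
A quick sanity check that the emulated chroot works; uname inside the chroot is answered through qemu-riscv64-static, so it should report the target architecture:

sudo chroot $installDir uname -m    # should print: riscv64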

About the extlinux config: I added new entries l0 and l0r for the new kernel and shifted the label numbers of the official kernels:

## /boot/extlinux/extlinux.conf
##
## IMPORTANT WARNING
##
## The configuration of this file is generated automatically.
## Do not edit this file manually, use: u-boot-update

default l0
menu title U-Boot menu
prompt 0
timeout 50

label l0
menu label KERNELVERSION
linux /vmlinuz-KERNELVERSION
initrd /initrd.img-KERNELVERSION
fdtdir /dtbs/KERNELVERSION
# append root=UUID=be4024ce-820d-4b33-a122-914aeae8d4bd root=/dev/mmcblk0p4 rw console=tty0 console=ttyS0,115200 earlycon rootwait stmmaceth=chain_mode:1 selinux=0
append root=UUID=be4024ce-820d-4b33-a122-914aeae8d4bd root=/dev/nvme0n1p4 rw console=tty0 console=ttyS0,115200 earlycon rootwait stmmaceth=chain_mode:1 selinux=0

label l0r
menu label KERNELVERSION (rescue target)
linux /vmlinuz-KERNELVERSION
initrd /initrd.img-KERNELVERSION
fdtdir /dtbs/KERNELVERSION
append root=UUID=be4024ce-820d-4b33-a122-914aeae8d4bd root=/dev/nvme0n1p4 rw console=tty0 console=ttyS0,115200 earlycon rootwait stmmaceth=chain_mode:1 selinux=0 single

label l1
menu label Debian GNU/Linux bookworm/sid 6.1.31-starfive
linux /vmlinuz-6.1.31-starfive
initrd /initrd.img-6.1.31-starfive
fdtdir /dtbs/6.1.31
append root=UUID=be4024ce-820d-4b33-a122-914aeae8d4bd root=/dev/nvme0n1p4 rw console=tty0 console=ttyS0,115200 earlycon rootwait stmmaceth=chain_mode:1 selinux=0

label l1r
menu label Debian GNU/Linux bookworm/sid 6.1.31-starfive (rescue target)
linux /vmlinuz-6.1.31-starfive
initrd /initrd.img-6.1.31-starfive
fdtdir /dtbs/6.1.31
append root=UUID=be4024ce-820d-4b33-a122-914aeae8d4bd root=/dev/nvme0n1p4 rw console=tty0 console=ttyS0,115200 earlycon rootwait stmmaceth=chain_mode:1 selinux=0 single

label l2
menu label Debian GNU/Linux bookworm/sid 5.15.0-starfive
linux /vmlinuz-5.15.0-starfive
initrd /initrd.img-5.15.0-starfive
fdtdir /dtbs/5.15.0
append root=UUID=be4024ce-820d-4b33-a122-914aeae8d4bd root=/dev/nvme0n1p4 rw console=tty0 console=ttyS0,115200 earlycon rootwait stmmaceth=chain_mode:1 selinux=0

label l2r
menu label Debian GNU/Linux bookworm/sid 5.15.0-starfive (rescue target)
linux /vmlinuz-5.15.0-starfive
initrd /initrd.img-5.15.0-starfive
fdtdir /dtbs/5.15.0
append root=UUID=be4024ce-820d-4b33-a122-914aeae8d4bd root=/dev/nvme0n1p4 rw console=tty0 console=ttyS0,115200 earlycon rootwait stmmaceth=chain_mode:1 selinux=0 single

Now we can reboot and see the new kernel in the boot menu.

Bonus 1: run qemu with risc-v

First step: install the required software:

apt install qemu-system-misc opensbi u-boot-qemu 

Next step: make a disk image. I will use a ready-made image from the Debian Quick Image Baker pre-baked images
(https://people.debian.org/~gio/dqib/). Choose «Images for riscv64-virt», download and extract it:

qemu>cd ./dqib_riscv64-virt/
dqib_riscv64-virt>ls
image.qcow2  kernel      ssh_user_ecdsa_key    ssh_user_rsa_key
initrd       readme.txt  ssh_user_ed25519_key

Last step: run it! I will start the virtual machine with:

  • 8 CPUs

  • 1G RAM

  • image.qcow2 drive

  • ethernet

  • port forwarding from host port 2222 to guest port 22

  • u-boot

  • and no graphics

qemu-system-riscv64 -machine virt -m 1G -smp 8 -cpu rv64 \
    -device virtio-blk-device,drive=hd \
    -drive file=debian-foreign-arch/image.qcow2,if=none,id=hd \
    -device virtio-net-device,netdev=net \
    -netdev user,id=net,hostfwd=tcp::2222-:22 \
    -kernel /usr/lib/u-boot/qemu-riscv64_smode/uboot.elf \
    -object rng-random,filename=/dev/urandom,id=rng \
    -device virtio-rng-device,rng=rng \
    -nographic -append "root=LABEL=rootfs console=ttyS0" \
    -virtfs local,path=/home/alexey/risc-v/,security_model=none,mount_tag=risc-v
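
With the hostfwd rule above, the guest's SSH port is reachable from the host on port 2222 (user names and keys for the DQIB image are described in its readme.txt):

# the user name depends on the image, see readme.txt
ssh -p 2222 root@localhost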

One tip: how do you mount a host directory into the virtual machine? I prefer to use 9p.

Run this inside the QEMU guest:

mkdir -p /home/alexey/risc-v
mount -t 9p -o _netdev,trans=virtio,version=9p2000.u,msize=104857600 risc-v /home/alexey/risc-v

Bonus 2: Build deb packages for risc-v

The «great answer» is simple: the same way as usual, but in a container.
Cross-compiling a full system (Debian with a GUI) is rather risky.

Well, let’s start the container:

podman run --platform linux/riscv64  -v `pwd`:/volume  -v ~/dev/general-scripts:/scripts -it  debian:sid 

Now run these commands inside the podman container:

# prepare
PATH=/scripts:$PATH
# install software
apt-get update
apt-get install wget debhelper vim build-essential bc kmod cpio flex python3 \
    devscripts lintian debmake

# get the package sources
rm -rf /scripts/work/vim
mkdir -p /scripts/work/vim
cd /scripts/work/vim/
dget http://deb.debian.org/debian/pool/main/v/vim/vim_9.0.1378-2.dsc
cd vim-9.0.1378/

# install build dependencies
mk-build-deps -i --tool='apt-get -o Debug::pkgProblemResolver=yes --no-install-recommends --yes'
# drink coffee/tea and wait
# clean build-deps artefacts
rm -rf *build-deps_*

# build it
debuild -us -uc
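
When debuild finishes, the resulting packages land one directory above the source tree (a quick check):

ls ../*.deb ../*.changes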

Bonus 3: kernel build script

I wrote a kernel build script. It uses cross-compilation and the original StarFive defconfig, and it builds deb packages. It can also install the built kernel onto a mounted storage device containing your rootfs.

I placed it here: kernelbuild-script.
Of course, you can customize it for your own tasks.

There are two mandatory flags (see the usage sketch after the list):

  • -b — build kernel

  • -i — install kernel
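
A typical invocation might look like this (the script file name below is a placeholder; see the repository for the exact usage):

# build the kernel (cross-compile + deb packaging)
./kernel-build.sh -b
# install the freshly built kernel onto the mounted rootfs
./kernel-build.sh -i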

Conclusions

The StarFive VisionFive 2 looks like a well-made and well-supported device. You can use the ready-made image and kernels, and several patches have already been accepted into the mainline Linux kernel.

Regarding the RISC-V arch, a few tips:

  • Build kernel with cross-compilation

  • Build packages in containers with binfmt

  • Use chroot for fast development

  • Qemu is cool 😀

Chapter and verses

VisionFive 2 Single Board Computer Software Technical Reference Manual
VisionFive 2 Single Board Computer Quick Start Guide
Visionfive pin gpio header
Starfive visionfive on github
JH7110 Upstream Status
Opensbi definition
u-boot spl boot docs
Kernel main website
Binfmt on debian manpages
Debian quick image baker
kernelbuild-script


Link to the original article: https://habr.com/ru/articles/843512/

