
The Cyberotter

Cyberotter is the nickname for ntnu-otter-04; it got the name because it was initially used for cybersecurity work by Petter Solnør and Øystein Volden.

The main differences between the Cyberotter and the FishOtter are in the sensor and navigation setup described below.

They use the same interface to the Torqeedo board and rely on the same control system, both by Nikolai Lauvås.

The Cyberotter is equipped with one Harxon GPS-1000 GNSS survey antenna in the bow and one in the stern, each connected to its own u-blox ZED-F9P GNSS receiver. The stern receiver (its config) is configured as a 'moving base' that sends RTCM corrections to the bow receiver (its config), which is set up as a 'rover'. This enables the bow receiver to calculate a precise heading, which is used together with the position of the bow antenna to control the Cyberotter.

As the Otter is underactuated, 2D position and heading are sufficient to control it. A simple improvement would be to use the midpoint between the two GNSS antennas as the position of the Otter, rather than the bow antenna alone.

The Cyberotter is also equipped with an ADIS16490 IMU. Currently this is only used to log acceleration and angular rate, but it could in the future (or offline) be used to aid the position and heading estimates, and also to estimate roll and pitch. TODO: How is the ADIS configured?

Both the GNSS receivers and the IMU are interfaced through the SenTiBoard to ensure precise timing.

The Cyberotter setup also includes an SBG Ellipse2-D, a standalone INS. It is fed with GNSS signals from the same antennas as the u-blox bow/stern receivers, through antenna splitters. It is included because it is a field-proven INS and because the Cyberotter itself does not currently provide roll and pitch estimates. The SBG is interfaced with ROS through the SBG ROS driver, and its data is also sent to DUNE for logging through the ROS IMC bridge.

SBG Driver Config

We use a ROS driver to interface with the SBG Ellipse2-D: https://github.com/SBG-Systems/sbg_ros_driver.

IMC - ROS Bridge

The IMC ROS bridge developed by the Swedish Maritime Robotics Centre (SMaRC) is used to map ROS topics from the SBG driver to the IMC format. The IMC messages are then transmitted from the Nvidia Jetson to DUNE running on the BeagleBone Black. Link: https://github.com/smarc-project/imc_ros_bridge
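As a rough illustration, the bringup on the Jetson might look something like the sketch below. This is not the authoritative procedure: the launch file names and arguments should be checked against the two repositories, and the DUNE port is an assumption (the Otter BBB IP is taken from the IP table at the end of this page).

    # Start the SBG driver (publishes the SBG IMU/EKF topics)
    roslaunch sbg_driver sbg_device.launch

    # Start the IMC bridge towards DUNE on the Otter BBB
    # (launch file name, argument names and the port are assumptions)
    roslaunch imc_ros_bridge bridge.launch server_addr:=10.0.63.10 server_port:=6002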

DUNE Navigation pipeline

The data from the SenTiBoard is parsed by a DUNE task (Sensors/Sentiboard), which sends out IMC::Acceleration, IMC::AngularRate, IMC::UbxRelPosNED and IMC::GpsFix.

The UbxRelPosNED message is consumed by Sensors/Heading, which:

* Generates a heading (IMC::EulerAngles psi)
* TODO: determines the center position of the Otter, sent out as another GpsFix message

The Navigation/General task consumes:

* GpsFix from Sensors/Heading, as this is the center position of the Otter
* EulerAngles from Sensors/Heading, giving yaw

These are assembled into an EstimatedState message, which is dispatched to the controllers (and the rest of DUNE).
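For reference, a minimal sketch of how these tasks could be enabled in the vehicle's DUNE configuration (.ini) is shown below. The section names follow the task names above; the entity labels are placeholders, and the actual configuration files may set additional task parameters.

    [Sensors.Sentiboard]
    Enabled      = Hardware
    Entity Label = SenTiBoard

    [Sensors.Heading]
    Enabled      = Hardware
    Entity Label = Heading

    [Navigation.General]
    Enabled      = Hardware
    Entity Label = Navigation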

RTK configuration

To obtain heading from the bow GNSS receiver, it is fed with RTCM corrections from the stern receiver (configured as a moving base).

Configuration of the u-blox GNSS receivers is done through the u-center software as follows:

Following the directions above, set the following configurations (largely taken from the u-blox moving base application note):

The BBB in the RTK base must be configured to pipe the RTCM messages coming in on UART2 to a TCP server that the BBB on board the Cyberotter listens to. This is accomplished by doing the following:
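One way to set up such a pipe is with socat, sketched below. This assumes socat is available on both BBBs; the serial device names, baud rate and TCP port are assumptions and must match the actual setup (the RTK base BBB IP is taken from the IP table at the end of this page).

    # On the RTK base BBB: forward RTCM arriving on UART2 to a TCP server
    socat /dev/ttyO2,b38400,raw,echo=0 TCP-LISTEN:8500,reuseaddr,fork

    # On the Cyberotter BBB: connect to the base and write the corrections
    # to the correction input of the on-board receiver (device name assumed)
    socat TCP:10.0.60.45:8500 /dev/ttyO4,b38400,raw,echo=0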

The RTCM3.3 messages described above contain the following information:

*  RTCM3.3 1005: Stationary RTK Reference Station Antenna Reference Point (ARP).
*  RTCM3.3 1074: GPS Type 4 Multiple Signal Message for the American GPS system.
*  RTCM3.3 1084: GLONASS Type 4 Multiple Signal Message for the Russian GLONASS system.
*  RTCM3.3 1094: Galileo Type 4 Multiple Signal Message for the European Galileo system.
*  RTCM3.3 1124: BeiDou Type 4 Multiple Signal Message for the Chinese BeiDou system.
*  RTCM3.3 1230: GLONASS L1 and L2 Code-Phase Biases.
*  RTCM3.3 4072: u-blox-specific message which provides location and vector data of the moving base (stern).

Troubleshooting

Time Synchronization

The sync “pipeline” is set up as follows:

PPS

The GPS PPS line is connected to a GPIO pin and read by the Linux kernel, since the PPS kernel module is built in.

The PPS GPIO is set up by the following lines in the device tree file (double check with the yocto repo for the current setup):

    pps {
        pinctrl-names = "default";
        pinctrl-0 = <&pps_pins>;
        compatible = "pps-gpio";
        status = "okay";
        gpios = <&gpio2 6 0>;
        assert-rising-edge;
    };

    pps_pins: pinmux_pps_pins {
        /* lcd_data0 / gpio2_6 */
        pinctrl-single,pins = <0x0a0 (PIN_INPUT_PULLDOWN | MUX_MODE7)>;
    };

If making changes, make sure that the pin you choose is not used or configured for anything else. See Device tree configuration below.

The kernel is configured to include PPS by the following lines in its (def)config:

CONFIG_PPS=y
CONFIG_NTP_PPS=y

With the device tree and kernel changes in place, a PPS device such as /dev/pps0 shows up. pps-tools are installed in Yocto, enabling testing with e.g. ppstest /dev/pps0.

NTP

The NTP server uses timing information from two sources: PPS and NMEA ZDA.

This relies on the yocto packages ntp, ntp-tickadj, ntpq, ntp-utils, and ntpdc. Once it is set up, you can inspect the NTPD status using ntpq -p, which will print the offset and jitter of the two sources (NMEA and PPS).

PPS

ntp.conf is set up with

# GPS: PPS/ATOM from pps-gpio service
server 127.127.22.0 minpoll 3 maxpoll 5 iburst
fudge 127.127.22.0 refid PPS time1 0.000001
fudge 127.127.22.0 flag3 1

This sets up NTPD to use the PPS clock discipline found at /dev/pps0 (hence 127.127.22.0), names it PPS (refid PPS), and applies an initial time offset of 0.000001 seconds (time1). flag3 1 tells NTPD to use the kernel PPS discipline. See the driver docs for more details.

NMEA ZDA

The ZDA message is sent out by the bow receiver on UART2 TX and comes in on the BBB on UART5 (/dev/ttyO5).

Using GPSD

Initially, the NMEA message was parsed by GPSD and shared with NTPD over a shared-memory clock, but this was abandoned to reduce latency. This setup can be configured in ntp.conf with:

# GPS Shared Memory Clock from GPSD
server 127.127.28.0 minpoll 3 maxpoll 5 iburst prefer # This must be preferred
fudge 127.127.28.0 refid GPS # Name it GPSD
fudge 127.127.28.0 time1 0.150 flag1 1 # Estimated processing delay offset of 150 ms

“The magic pseudo-IP address 127.127.28.0 identifies unit 0 of the ntpd shared-memory driver (NTP0); 127.127.28.1 identifies unit 1 (NTP1).” (from GPSD docs)

GPSD can also parse other messages from the receiver, and will display this information in a CLI invoked by calling gpsmon. The yocto packages used for this setup were gpsd, gps-utils, gpsd-udev, gpsd-conf and gpsd-gpsctl (it is likely that not all are needed).
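If the GPSD path is re-enabled, gpsd must also be pointed at the receiver's serial port, for example as below (device path taken from this setup; -n makes gpsd read the device without waiting for a client):

    gpsd -n /dev/ttyO5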

Directly parsed by NTPD

NTPD can also parse the ZDA NMEA message itself (but only a very limited set of messages, as opposed to GPSD). This is the current setup on the Cyberotter, and is achieved by the following lines in ntp.conf:

# NMEA refclock driver directly from serial port
server 127.127.20.0 mode 56  minpoll 4 iburst prefer true
fudge 127.127.20.0 flag1 0 flag2 0 flag3 0 flag4 0 time1 0.00 refid GPS-NMEA

Mode 56 means a line speed of 38400 bps and that ZDA or ZDG sentences are processed; see the NTP driver docs for more information. 127.127.20.X means that NTPD will use the NMEA reference clock driver and listen for messages at /dev/gpsX, while the receiver is connected to /dev/ttyO5. This is fixed by adding the following udev rule (recipes-ntnu/ntp/files/99-gps-name.rules):

KERNEL=="ttyO5", SUBSYSTEM=="tty", SYMLINK+="gps0", RUN+="/bin/stty -F /dev/ttyO5 speed 38400"
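After installing the rule (or rebooting), the symlink and line speed can be verified with standard tools, for example:

    # Reload and re-trigger udev rules without rebooting
    udevadm control --reload-rules && udevadm trigger
    # Check that the symlink exists and that the port speed was set
    ls -l /dev/gps0
    stty -F /dev/ttyO5 speed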

PTP

The advantage of time sync using PTP is that it is a more accurate solution, as it is closer to the hardware, residing in the PHY layer of the stack. This means that you do not need to connect to a server IP to use it; you only need to be plugged into the same network. Performance hinges on HW support. There are several implementations of the IEEE PTP standard; the Cyberotter uses Linuxptp (installed in Yocto using the linuxptp package, with a custom append to set up systemctl services and configuration files). It consists of two parts:

- ptp4l: the PTP daemon itself
- phc2sys: responsible for syncing the PTP clock with the system clock

There is also the pmc tool to request status messages from PTP, e.g. pmc -u -b 0 'get CURRENT_DATA_SET'. PTP is set up by following the Ouster PTP guide, chapter 15.6.2 (Configuring ptp4l as a Local Master Clock) and 15.6.3 (Configuring phc2sys to Synchronize the System Time to the PTP Clock).

Unfortunately, the PTP documentation is poor, but here are some useful links:

- Time-sensitive Networking has some tips on what master clock offsets to expect when the clocks are synced (< 100 ns). The values shown in the ptp4l and phc2sys logs are in nanoseconds. It also points out the difference between CLOCK_TAI (atomic, i.e. continuous) and CLOCK_REALTIME (UTC, which is offset from TAI by an offset that increases as leap seconds are added; 37 s as of 2017). Our working theory is that it does not matter (for these applications) whether you use UTC or TAI, as long as you are consistent.
- Red Hat Linux PTP guide
- List of different clock classes

ptp4l config

ptp4l -i eth0 -f /etc/ptp4l.conf

The config (ptp4l.conf) is the default config for e.g. Ubuntu, but clockClass has been set to 6 (master only) on the BeagleBone. This forces the BeagleBone to be the PTP grandmaster clock. priority1 and priority2 are also set to the same value as the clock class (6). Leave all other parameters in /etc/ptp4l.conf at their default values. The full config is in recipes-ntnu/linuxptp/files/ptp4l.conf.
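For reference, the changed lines would look roughly like this (an excerpt sketch; the actual file is the one referenced above):

    [global]
    # Force the BeagleBone to be grandmaster
    clockClass              6
    priority1               6
    priority2               6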

The Jetson Xavier is a slave in the PTP setup. In /etc/linuxptp/ptp4l.conf, set clockClass to 255 (slave only), and set priority1 and priority2 to the same value (255). Also ensure slaveOnly mode is enabled. Leave all other parameters at their default values.
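Correspondingly, the changed lines on the Jetson would look roughly like this (an excerpt sketch of /etc/linuxptp/ptp4l.conf):

    [global]
    # Slave-only configuration for the Jetson Xavier
    clockClass              255
    priority1               255
    priority2               255
    slaveOnly               1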

phc2sys config

On the BeagleBone (PTP grandmaster), phc2sys is set to sync the PTP hardware clock of the Ethernet controller (eth0) to the Linux CLOCK_REALTIME (which is disciplined by NTPD):

phc2sys -w -s CLOCK_REALTIME -c eth0

where -w signifies that it will wait for ptp4l to synchronize before starting. See recipes-ntnu/linuxptp/files. Any PTP slaves should run phc2sys the other way around, with the PTP HW clock of the Ethernet controller as the master (source) and CLOCK_REALTIME as the slave:

phc2sys -w -c CLOCK_REALTIME -s eth0
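The daemons are run as systemd services set up by the Yocto append (see recipes-ntnu/linuxptp/files). A phc2sys unit for a slave could look roughly like the sketch below; the unit layout and binary path are assumptions, so check the recipe for the actual service files.

    [Unit]
    Description=Synchronize system clock to the PTP hardware clock
    After=ptp4l.service
    Requires=ptp4l.service

    [Service]
    # -w: wait for ptp4l; the eth0 PHC is the source, CLOCK_REALTIME the sink
    ExecStart=/usr/sbin/phc2sys -w -c CLOCK_REALTIME -s eth0
    Restart=always

    [Install]
    WantedBy=multi-user.target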

Useful commands

Status PTP:

systemctl status ptp4l

Restart PTP:

systemctl restart ptp4l

Check the status of every actor connected:

pmc -u -b 1 'GET TIME_STATUS_NP'

Verify that the slaves are in fact configured to be slaves:

pmc -u -b 1 'GET SLAVE_ONLY'

Yocto Linux configuration

Device tree configuration

Some useful links for setting up the device tree:

Vision sensors

The cameras and the LiDAR are interfaced through ROS drivers:

IP info:

| Device | IP |
| --- | --- |
| RTK base BBB | 10.0.60.45 |
| Otter BBB | 10.0.63.10 |
| Land Rocket AP | 192.168.1.20 |
| Otter Bullet | 192.168.1.23 |
| Payload Jetson | 10.0.63.105 |
| ZED 2 stereo camera | X.X.X.X |
| FLIR camera left | 169.254.121.156 |
| FLIR camera right | 169.254.123.156 |
| Ouster LIDAR | X.X.X.X |