# Recovering and Simulating Pedestrians in the Wild

Ze Yang\\({}^{1,2}\\), Siva Manivasagam\\({}^{1,2}\\), Ming Liang\\({}^{1}\\), Bin Yang\\({}^{1,2}\\), Wei-Chiu Ma\\({}^{1,3}\\), Raquel Urtasun\\({}^{1,2}\\)

Uber Advanced Technologies Group\\({}^{1}\\), University of Toronto\\({}^{2}\\), MIT\\({}^{3}\\)

{zey,manivasagam,ming.liang,byang10,weichiu,urtasun}@uber.com

## 1 Introduction

A key requirement for mobile robots is that they interact and maneuver safely around humans. This is especially the case in autonomous driving, where the self-driving car should perceive each pedestrian in the scene in 3D and forecast their future trajectories. To deploy in the real world, we must verify that our autonomy system is robust and handles safety-critical cases such as a child occluded by a bus running in front of the car. However, it is unethical to test such cases in the real world. Moreover, it is expensive and not scalable to collect and manually label the full distribution of pedestrian scenarios to generate training and testing data for current ML-based perception systems. An appealing alternative is to leverage realistic sensor simulation systems to train and test the perception system. Here we focus on simulating realistic traffic scenes with pedestrians for the LiDAR sensor, a common sensor in self-driving. However, pedestrians are especially difficult to simulate; unlike vehicles, they are non-rigid objects with a wide variety of shapes, poses, and behaviors.

There are two lines of work when it comes to sensor simulation of pedestrian assets. One approach is to use artist-designed human meshes (e.g., CARLA [1]). Another is to use high-end 3D scanning systems in a controlled lighting setting with multiple cameras and/or depth sensors to create high-resolution human meshes [2, 3, 4, 5]. Both approaches require an artist to "rig" and animate behaviors for each human, which requires significant effort: the artist must first add a skeleton to the mesh for skinning and posing the character, and then design the sequence of joint angles for the pedestrian skeleton required to simulate a particular behavior. While these approaches have been widely used for creating realistic-looking pedestrians in video games and movies, they are expensive and not scalable: it is difficult to manually create or 3D scan all the diverse variations in shape, pose, and trajectory a pedestrian may take in the real world.

There has also been a large body of prior work on estimating 3D pose and shape from single images [6, 7, 8, 9, 10] or video [11, 12, 13, 14]. This is a more scalable solution, as images and videos of people are everywhere. However, image-only methods are prone to incorrect location/movement estimates in 3D and can sometimes produce unrealistic-looking meshes due to inaccurate depth estimates. As a consequence, while they have produced visually appealing results, which might be sufficient in some application domains (e.g., augmented reality, online games), their 3D fidelity is not sufficient for simulating pedestrian LiDAR readings.

Towards this goal, we leverage real-world sensor data captured by our autonomous driving fleet, which contains LiDAR point clouds and camera image sequences, to recover accurate 3D motion and shapes of pedestrians. Our approach, LiDAR for human Mesh Estimation (LiME), only requires a single low-cost artist-created mesh that we exploit to create a prior over human shapes, which we then pose and deform to match the sensor data.
We leverage the power of both deep learning and energy minimization methods to accurately recover shape and pose in the wild when no ground-truth is available. To simulate the virtual world, we use a realistic LiDAR simulation system, LiDARsim [15], which uses real-world data to generate a large collection of realistic background meshes of different scenes as well as vehicle assets. We then enhance it with a diverse bank of pedestrian shapes and poses reconstructed in the wild using LiME. We can then generate novel scenarios by selecting pedestrians in our bank, applying motion retargeting, and placing them in the scene. LiDARsim then renders the scene to generate realistic LiDAR point clouds. We show that we can generate simulated LiDAR that has little to no sim2real domain gap, which allows us to evaluate a state-of-the-art perception system. Furthermore, we demonstrate that when generating low-cost simulation data at scale for training data augmentation, we reduce the need for labeled data.

## 2 Related Work

3D Human Pose and Motion Estimation: Human motion capture (MoCap) is usually conducted in highly calibrated and laboratory-controlled environments. With the help of multi-view sensing [16] and marker-based technology [2], many high-quality dynamics measurements [4; 5; 17] have been collected, including accurate 2D and 3D skeletal joint locations over time. Based on these datasets, several methods have been developed to predict 3D human pose and motion from monocular images [18; 19; 20] and monocular video [21; 22; 23], achieving state-of-the-art performance. Unfortunately, these data, while useful, are still over-simplified: numerous real-world scenarios, _e.g._, environmental occlusions, are not captured. To overcome such limitations, recent work has focused on capturing large-scale "in-the-wild" datasets with 3D pose using IMUs and cameras [24; 25; 26]. Most efforts still focus on pose estimation from images. However, they have difficulty obtaining precise shape and pose because accurate depth is missing. We require more accuracy for simulating pedestrian scenarios and testing autonomy. Recent work [27; 28] has proposed using RGB-D images to predict 3D pose in indoor environments, but to our knowledge, we are the first to tackle estimating 3D pose over time from images and sparse LiDAR points at distance. This setting is important for recovering and simulating realistic humans in the wild for self-driving.

Non-rigid Body Surface Reconstruction: For realistic simulation, we need to reconstruct the 3D human mesh in the scene. While real-time mesh reconstruction of non-rigid objects from depth cameras [29] or RGB cameras [30; 31] exists, to re-articulate the humans for downstream tasks we also require human pose. We now discuss past work that recovers both 3D pose and articulated meshes. Most of these works [6; 7; 8; 9; 10] rely on strong shape priors such as SMPL [32]. They either directly regress human model parameters from observations or fit the parametric model to RGB images by minimizing carefully designed energy functions. To further ensure temporal consistency, [11; 12; 13; 14] leverage training signals from videos. [33] aligns articulated models with free-form deformation on densely sampled point clouds from multiple sensors. We focus on recovering 3D human pose and shape with small error from partial LiDAR data and images.
Sensor Simulation of Pedestrians: Prior work [1] simulates pedestrians by first creating artist-designed meshes that are manually rigged and animated, and then performing sensor simulation via graphics-engine rendering. While this allows for fine-grained control of pedestrian appearance and behavior, it is time-consuming and expensive, and it does not scale to capture the real-world pedestrian distribution. Efforts have been made to incorporate human avatars into robot simulators such as MORSE [34] for prototyping, data collection and evaluation [35]. This work has focused mostly on indoor environments and with unrealistic rendering. Our work focuses on leveraging data from in the wild, where we can automatically capture diverse human appearance, motion, and behavior and directly adapt these assets for realistic sensor simulation.

Figure 1: We recover realistic 3D human meshes and poses from sequences of LiDAR and camera readings, which can then be used in sensor simulation for perception algorithm training and testing.

## 3 Human Model

We utilize a Linear Blend Skinning (LBS) model, which we enhance with both bone scaling and per-vertex deformations to represent how the human body deforms as a function of its pose. We use this enhanced LBS model due to its simplicity and efficient computation, as opposed to higher-order blend skinning methods such as spherical blend skinning [36] or non-skeleton-based deformation methods [37]. Our experiments show that this simple representation outperforms popular human models (e.g., SMPL [32]) in reconstructing 3D shape from sensor data. Furthermore, it proves sufficient for our downstream task of simulating LiDAR data for testing and improving perception algorithms. We now review the LBS model and describe our bone-scaling and per-vertex deformation modifications to handle shape variation and appearance.

LBS represents the human body in two parts: a mesh representation and a hierarchical set of interconnected bones, _i.e._, the skeleton. The key idea is that as the skeleton moves, the positions of the mesh's vertices change, but not their connectivity. Each bone in the skeleton is associated with some portion of the character's visual representation (i.e., a set of vertices) in a process called _skinning_. Each mesh vertex has a specific corresponding "blend weight" for each skeleton bone. To calculate the final position of a vertex, a transformation matrix is created for each bone which, when applied to the vertex, first puts the vertex in bone space and then puts it back into mesh space. After applying this transformation to the vertex, it is scaled by its corresponding blend weight. More formally, let the template mesh \\(\\mathbf{V}\\in\\mathbb{R}^{N\\times 3}\\) be the set of \\(N\\) vertices \\(\\mathbf{V}=\\{\\mathbf{v}_{i}\\}_{i=1}^{N}\\) (with oriented normals \\(\\mathcal{N}=\\{\\mathbf{n}_{i}\\}_{i=1}^{N}\\)), and let \\(\\mathbf{W}\\in\\mathbb{R}^{N\\times K}\\) be the set of blend weights\\({}^{1}\\). We represent a skeleton pose with the set of joint rotation matrices \\(\\mathbf{\\Theta}_{i}\\in\\mathbf{SO}(3)\\), one for each joint, representing the rotation with respect to its parent in the skeletal tree. While this original LBS formulation is a good approximation of the human skeleton, it cannot model well different human body sizes deviating from the template mesh.
To address this, we introduce a learnable scale factor \\(s_{p}\\) for each bone in the skeleton, which we model to be symmetric with respect to the human spine, _e.g._, the left and right arms share the same bone scale factor. We traverse the tree and construct the transformation matrix for each joint \\(\\mathbf{T}_{k}(\\mathbf{s},\\mathbf{\\Theta})\\in\\mathbf{SE}(3)\\):

\\[\\mathbf{T}_{k}(\\mathbf{s},\\mathbf{\\Theta})=\\prod_{p\\in A(k)}\\begin{bmatrix}s_{p}\\mathbf{\\Theta}_{p}&(\\mathbf{I}-s_{p}\\mathbf{\\Theta}_{p})\\mathbf{j}_{p}\\\\ \\mathbf{0}&1\\end{bmatrix} \\tag{1}\\]

where \\(s_{p}\\) denotes the bone length scale factor between the \\(p\\)-th joint and its parent, \\(A(k)\\) is the set of joint ancestors of the \\(k\\)-th joint in order, \\(\\mathbf{\\Theta}_{p}\\) is the rotation matrix of the \\(p\\)-th joint w.r.t. its parent, and \\(\\mathbf{j}_{p}\\) is the coordinate of the \\(p\\)-th joint in the template mesh.

Footnote 1: These blend weights can be created for example by diffusing artist-annotated part-segmentations [32].

The coordinate of the \\(i\\)-th vertex can now be computed as a linear combination of the joint transformation matrices and its unique blend weights. However, the template mesh vertices alone cannot handle shape variations. Therefore, following [33], we also add a displacement vector for each vertex. The coordinates of the \\(i\\)-th vertex and the \\(k\\)-th joint in the posed mesh are computed as:

\\[\\mathbf{\\bar{v}}_{i}=\\sum_{k=1}^{K}\\mathbf{T}_{k}(\\mathbf{s},\\mathbf{\\Theta})(\\mathbf{v}_{i}+\\mathbf{n}_{i}d_{i})\\;w_{i,k}+\\mathbf{c}\\;,\\qquad\\mathbf{\\bar{j}}_{k}=\\mathbf{T}_{k}(\\mathbf{s},\\mathbf{\\Theta})\\mathbf{j}_{k}+\\mathbf{c} \\tag{2}\\]

where \\(w_{i,k}\\) is the skinning weight describing the influence of the \\(k\\)-th joint on the \\(i\\)-th vertex in the template shape, and \\(\\mathbf{c}\\in\\mathbb{R}^{3}\\) is the global translation of the root joint. The final posed mesh model is

\\[\\mathbf{M}=\\mathcal{M}(\\mathbf{W},\\mathbf{V},\\mathcal{N},\\mathbf{\\Theta},\\mathbf{c},\\mathbf{s},\\mathbf{D}) \\tag{3}\\]

with posed mesh \\(\\mathbf{M}\\), blend weights \\(\\mathbf{W}\\), mesh vertices \\(\\mathbf{V}\\), normals \\(\\mathcal{N}\\), joint angles \\(\\mathbf{\\Theta}\\), root location \\(\\mathbf{c}\\), bone scale factors \\(\\mathbf{s}\\), and per-vertex deformation matrix \\(\\mathbf{D}\\).

## 4 Reconstructing Pedestrians in the Wild

We now describe our method, LiDAR for human Mesh Estimation (LiME), for reconstructing pedestrians in the wild. Given a sequence of LiDAR measurements and camera images captured by a self-driving car, as well as 3D bounding boxes enclosing the pedestrians we want to reconstruct, we seek to estimate the pose trajectory (including global motion) and shape of each pedestrian in the scene. We use our modified LBS model \\(\\mathcal{M}\\) defined in Eq. 3 as our human body parameterization. For our reconstructions, the body model's skinning weights \\(\\mathbf{W}\\), template shape \\(\\mathbf{V}\\) and normals \\(\\mathcal{N}\\) are fixed, and we infer from data the pose (joint angles \\(\\mathbf{\\Theta}\\), offset \\(\\mathbf{c}\\)) and shape modifications (bone scale factors \\(\\mathbf{s}\\) and deformations \\(\\mathbf{D}\\)). We first use a regression network to predict initial estimates of \\((\\mathbf{\\Theta},\\mathbf{c},\\mathbf{s},\\mathbf{D})\\) from data. We then perform energy minimization to refine the prediction (see Figure 2).
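To make the posing model of Eqs. (1)-(2) concrete, the following is a minimal NumPy sketch of the enhanced LBS forward pass. The `parents`-array encoding of the skeletal tree and the function names are our own illustration under those assumptions, not the paper's implementation.

```python
import numpy as np

def joint_transforms(rotations, scales, joints, parents):
    """Per-joint 4x4 transforms, Eq. (1): product over a joint's ancestor chain
    (root first) of [[s_p R_p, (I - s_p R_p) j_p], [0, 1]].
    `parents` gives each joint's parent index (-1 for the root), ordered so that
    every parent precedes its children."""
    K = len(parents)
    T = np.zeros((K, 4, 4))
    for k in range(K):
        local = np.eye(4)
        R = scales[k] * rotations[k]               # s_p * Theta_p
        local[:3, :3] = R
        local[:3, 3] = (np.eye(3) - R) @ joints[k]
        T[k] = local if parents[k] < 0 else T[parents[k]] @ local
    return T

def pose_mesh(V, normals, d, W, T, c):
    """Eq. (2): displace each template vertex along its normal, blend the joint
    transforms with the skinning weights W (num_vertices x K), add the root offset c."""
    V_disp = V + normals * d[:, None]                             # v_i + n_i d_i
    V_h = np.concatenate([V_disp, np.ones((len(V), 1))], axis=1)  # homogeneous coords
    blended = np.einsum('nk,kij->nij', W, T)                      # per-vertex blended transform
    return np.einsum('nij,nj->ni', blended, V_h)[:, :3] + c
```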
As we do not have ground-truth pose or shape, we use the objective function to self-supervise our network. We now describe the regression network and energy minimization in more detail. ### Sensor Fusion Regression Network Our regression network takes as input the LiDAR and camera image centered and cropped around each pedestrian, and outputs the initial estimate of the body model parameters \\((\\mathbf{\\Theta},\\mathbf{c},\\mathbf{s},\\mathbf{D})\\). Towards this goal, the camera image is fed into a 2D CNN to compute image features. We then apply bilinear interpolation to sample the corresponding image feature for each LiDAR point using geometry and the camera calibration. Finally, each LiDAR point and its concatenated image feature are consumed by a PointNet [38] network to predict the human parameters. Since the regression network has difficulty identifying which direction the human is facing, we follow [39] and run two branches of the network, where the root joint angle is initialized to either face forward (\\(0^{\\circ}\\)) or backward (\\(180^{\\circ}\\)). ### Energy Formulation We define the objective function to capture the fact that our shape should be consistent with the point clouds from the LiDAR measurements (\\(E_{\\text{sim}}\\)) and that the estimated 3D joints should be consistent with the 2D joints estimated from images (\\(E_{\\text{joint}}\\)). We add an additional term, \\(E_{\\text{prior}}\\), to regularize the poses to be natural, and the deformed shape to be smooth and not have large deviations from the mesh template. The full objective function is: \\[E(\\mathbf{\\Theta}_{1:T},\\mathbf{c}_{1:T},\\mathbf{s},\\mathbf{D})=\\sum_{t}\\lambda_{ \\text{sim}}E_{\\text{sim}}(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t},\\mathbf{s},\\mathbf{ D})+\\lambda_{\\text{joint}}E_{\\text{joint}}(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t}, \\mathbf{s})+E_{\\text{prior}}(\\mathbf{\\Theta}_{t},\\mathbf{s},\\mathbf{D}) \\tag{4}\\] where \\(t\\) is the time step in the pedestrian trajectory, and \\(\\mathbf{\\Theta}_{1:T}\\), \\(\\mathbf{c}_{1:T}\\) are the sequence of pose joint angles and root offsets. We next describe how we compute each term. LiDAR Consistency:The LiDAR consistency term encourages the ray-casted point cloud from the estimated mesh \\(M(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t},\\mathbf{s},\\mathbf{D})\\) to match with the real partial point cloud \\(\\mathbf{X}\\) of the pedestrian through the Chamfer loss: \\[E_{\\text{sim}}(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t},\\mathbf{s},\\mathbf{D})=\\frac{1 }{\\left|\\mathbf{X}\\right|}\\sum_{\\mathbf{x}\\in\\mathbf{X}}\\min_{\\mathbf{y}\\in \\mathbf{Y}}\\left\\|\\mathbf{x}-\\mathbf{y}\\right\\|_{2}^{2}+\\frac{1}{\\left| \\mathbf{Y}\\right|}\\sum_{\\mathbf{y}\\in\\mathbf{Y}}\\min_{\\mathbf{x}\\in\\mathbf{X}} \\left\\|\\mathbf{y}-\\mathbf{x}\\right\\|_{2}^{2} \\tag{5}\\] where \\(\\left|\\mathbf{X}\\right|\\) denotes the cardinality of point set \\(\\mathbf{X}\\), and \\(\\mathbf{Y}=\\{y_{1}\\dots y_{n}|y_{i}\\in\\mathbb{R}^{3}\\}\\) is the rendered points from the estimated mesh. Note that this is a differentiable point set distance and we exploit the Moller-Trumbore [40] ray casting algorithm which is differentiable (w.r.t. the mesh vertices) such that the full model can be trained end-to-end. We refer the reader to the supplementary material for details of the ray-caster and its differentiability. 
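As an illustration of the LiDAR consistency term, the snippet below computes the symmetric Chamfer distance of Eq. (5) between two point sets. It is a minimal NumPy sketch; the paper's version is differentiated end-to-end through the ray-caster, which this sketch does not attempt.

```python
import numpy as np

def chamfer(X, Y):
    """Symmetric Chamfer distance of Eq. (5) between observed LiDAR points
    X (n, 3) and ray-casted points Y (m, 3) on the posed mesh."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)  # (n, m) squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```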
When computing \\(E_{\\text{sim}}\\), we take into account objects that occlude the sensor's field-of-view of the pedestrian, thereby ignoring simulated points from the ray-caster that would not appear due to occlusion.

Figure 2: LiDAR for human Mesh Estimation (LiME): Given sensory observations, a sensor fusion regression network predicts the human parameters which minimize the objective function in Eq. 4. We then perform energy minimization over the sequence to obtain an optimized shape and 3D pose.

Human Joints Consistency: We exploit camera images by first detecting 2D joints using a state-of-the-art 2D pose estimator [41]. We then encourage the projection of the predicted 3D pose to be consistent with the 2D pose estimates:

\\[E_{\\text{joint}}(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t},\\mathbf{s})=\\sum_{k\\in B}m_{k}\\rho(\\pi(\\mathbf{j}_{k},\\mathbf{\\Omega})-p_{k}) \\tag{6}\\]

where \\(\\mathbf{j}_{k}\\) is the \\(k\\)-th joint transformed according to Eq. 2, \\(B\\) is the subset of 3D joints that have 2D counterparts, and \\(p_{k}\\) and \\(m_{k}\\) are the corresponding estimated 2D joint and confidence score. \\(\\pi\\) is the projection function that takes the camera parameters \\(\\mathbf{\\Omega}\\), which are given since the cameras of self-driving cars are calibrated, and projects the 3D joint locations onto the image plane. \\(\\rho\\) is the \\(\\sigma^{2}\\)-scaled Geman-McClure robust penalty function defined as \\(\\rho(x)=(x^{2}\\sigma^{2})/(x^{2}+\\sigma^{2})\\), with \\(\\sigma=100\\).

Pose and Shape Priors: We incorporate our prior knowledge of what are reasonable human poses and shapes to be robust to noisy sensor data. For joint angles, we follow [6; 11] and represent the joint angle prior as the negative log-likelihood of a Gaussian Mixture Model (GMM) learned from the CMU MoCap dataset [17]. We also add a bone scale prior that encourages the bone lengths to be close to a canonical size. The pose prior is:

\\[E_{\\text{pose}}(\\mathbf{\\Theta}_{t},\\mathbf{s})=-\\log\\Big(\\sum_{r=1}^{R}g_{r}\\mathcal{N}(\\mathbf{\\Theta}_{t};\\mu_{r},\\mathbf{\\Sigma}_{r})\\Big)+\\lambda\\sum_{k=1}^{K}\\Big(\\prod_{p\\in A(k)}s_{p}-1\\Big)^{2} \\tag{7}\\]

with \\(R=8\\) Gaussians, \\((g_{r},\\mu_{r},\\mathbf{\\Sigma}_{r})\\) the weight, mean and covariance of the \\(r\\)-th Gaussian, and \\(\\prod_{p\\in A(k)}s_{p}\\) the cumulative scale factor for the bone length between the \\(k\\)-th joint and its ancestors. To ensure the deformed mesh still retains most of the mesh template shape and has smoothly-varying and small deformations, we add a Laplacian mesh regularizer [42] and an \\(\\ell_{2}\\) regularizer, respectively:

\\[E_{\\text{shape}}(\\mathbf{D})=\\sum_{i=1}^{N}\\lVert\\mathcal{L}(\\mathbf{v}_{i}+\\mathbf{n}_{i}d_{i})-\\mathcal{L}(\\mathbf{v}_{i})\\rVert_{2}^{2}+\\lambda\\sum_{i=1}^{N}d_{i}^{2} \\tag{8}\\]

where \\(\\mathbf{v}_{i}\\) and \\(\\mathbf{n}_{i}\\) are the vertex location and normal in the mesh template, \\(d_{i}\\) is the corresponding displacement along the normal direction, and \\(\\mathcal{L}\\) is the Laplace operator. The total prior is:

\\[E_{\\text{prior}}(\\mathbf{\\Theta}_{t},\\mathbf{s},\\mathbf{D})=\\lambda_{\\text{pose}}E_{\\text{pose}}(\\mathbf{\\Theta}_{t},\\mathbf{s})+\\lambda_{\\text{shape}}E_{\\text{shape}}(\\mathbf{D}) \\tag{9}\\]
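A small sketch of the joint-consistency term of Eq. (6) above, assuming a standard pinhole decomposition of the camera parameters \\(\\mathbf{\\Omega}\\) into intrinsics and extrinsics; the decomposition, parameter names, and functions are illustrative assumptions rather than the paper's code.

```python
import numpy as np

SIGMA = 100.0  # scale of the Geman-McClure penalty reported in the paper

def geman_mcclure(r, sigma=SIGMA):
    """rho(x) = (x^2 * sigma^2) / (x^2 + sigma^2), applied to the norm of the 2D residual."""
    x2 = np.sum(r ** 2, axis=-1)
    return (x2 * sigma ** 2) / (x2 + sigma ** 2)

def joint_energy(joints_3d, joints_2d, conf, K_cam, R, t):
    """Eq. (6): project the posed 3D joints with a calibrated pinhole camera and
    penalise deviation from the detected 2D joints, weighted by their confidence."""
    cam = R @ joints_3d.T + t[:, None]        # camera-frame coordinates, (3, K)
    uv = (K_cam @ cam)[:2] / cam[2]           # pinhole projection to pixels, (2, K)
    return np.sum(conf * geman_mcclure(uv.T - joints_2d))
```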
### Learning and Inference

Inference: We perform a forward pass for each pedestrian frame and output the initial model parameters. These predictions are then further refined by minimizing the differentiable energy defined in Eq. 4, which yields the final pose and shape of each pedestrian at each frame. In practice we found that a two-step energy minimization works well, where we first optimize \\(\\mathbf{\\Theta}_{1:T}\\), \\(\\mathbf{c}_{1:T}\\), \\(\\mathbf{s}\\) until convergence, and then optimize the deformation variable \\(\\mathbf{D}\\) until convergence. Each subject typically converges in 50 iterations. We adopt the Adam optimizer [43], which runs much faster than a second-order optimizer, to optimize our objective. Please see the supplementary material for more details.

Learning: Since we do not have ground-truth shape or pose for our in-the-wild setting, we use Eq. 4 (for a single frame) as the loss function to train the network in a self-supervised fashion. As mentioned in Section 4.1, we use two branches with different root initializations and apply a hindsight loss during training [39]. We pass the result with the lower loss to the energy minimization step during inference. Please see the supplementary material for more details.

## 5 LiDAR Simulation of Pedestrians

In order to produce realistic sensor simulation, we first require a virtual world with realistic backgrounds (i.e., roads, buildings, signs) and dynamic objects (e.g., vehicles, pedestrians), as well as a sensor simulation system that has high fidelity with respect to the real LiDAR. LiDARsim [15] is a LiDAR simulator that uses real data to generate background scenes and vehicle dynamic objects. LiDARsim then places the assets in a scenario (provided by either a labeled snippet, a tracking algorithm, artist-drawn trajectories, or algorithmically) and renders realistic LiDAR at each time step using both physics and machine learning. In particular, a neural network is used to enhance the realism of the ray-casted LiDAR by determining which points would not return in the real world (e.g., due to specular reflections or far distance). We note that this simulator is different from the ray-tracer described in Section 4, as LiDARsim has a high-performance ray-tracer that scales to millions of scene elements. This ray-tracer is also non-differentiable, and thus not suited for our reconstruction framework. While LiDARsim provides high-fidelity backgrounds and vehicles, it lacks realistic pedestrians. We now describe how we enhance LiDARsim with the pedestrians reconstructed using LiME.

Towards this goal, we first build an asset bank of pedestrian sequences and their corresponding meshes directly from data captured by our self-driving fleet. Since the trajectories and mesh sequences in the asset bank can be quite diverse (walking, running, standing, sitting, etc.), we ease the reuse of action-specific pose dynamics by clipping each cyclic pedestrian trajectory to consist of a single cycle, where the human poses in the start and end frames are similar. The average action cycle length is 1.5 seconds. Then, for each new query pedestrian trajectory to be simulated, we select a pedestrian moving at a similar speed from the asset bank, adapt it to the new scene, and simulate the LiDAR data with LiDARsim. We now discuss each step in more detail.

Our simulation approach works as follows. The user provides a bird's eye view (BEV) 2D trajectory in the scene map as a high-level description of the motion to simulate. Note that this trajectory can come from an existing trajectory (recovered from recorded snippets via tracking or labeling), can be drawn by a test engineer, or can be produced algorithmically. In our experiments, we use labeled snippet trajectories as our query trajectories.
We then retrieve the asset in the bank that is most similar to this trajectory query. We use velocity as our similarity function (specifically, we select the asset trajectory whose velocity is consistently within 0.5 m/s of the query trajectory's), as action-specific pose dynamics are specific to particular velocities. We then modify the retrieved asset and retarget it to perform the desired motion. Specifically, we project the query trajectory onto the retrieved asset trajectory in BEV, and use Slerp [44] to interpolate the human poses for each time-step in the query trajectory. Note that this modification affects both the joint angles and the associated mesh via our skinning model (see Fig. 4). Finally, we use LiDARsim to simulate the scene as seen by the sensor.

## 6 Experimental Evaluation

We first evaluate our proposed method for estimating human surface geometry from LiDAR and image sequences. We then show how capturing realistic pedestrian trajectories in the wild enhances simulation environments and improves performance on autonomy tasks such as pedestrian detection.

### Pedestrian Reconstruction from Sparse LiDAR Points

We evaluate our model on the 3DPW [26] dataset, which contains 60 sequences (12 in the validation split) of real-world scenarios and 18 different humans, with images, ground-truth pose, and complete clothed 3D shape. We place a virtual LiDAR sensor at the camera center and ray-cast the clothed human mesh in the dataset to generate simulated LiDAR points. Given 3DPW real images and synthetic LiDAR, we evaluate our algorithm on estimating pose and shape. We measure the mesh error in cm with the mean Per-Vertex-Error (PVE) and the square root of the Chamfer distance (CD) between the vertices of our prediction and the ground-truth's. We measure the joint estimation error with the mean Per-Joint-Position-Error (MPJPE) in cm.

**Ablation on input features and energy minimization:** The effect of using different input features is reported in Table 1. When we fuse image and LiDAR, the reconstruction error is lower than using either feature alone. Energy minimization further reduces the error.

| Input | CD | PVE | MPJPE |
| --- | --- | --- | --- |
| Image | 6.77 | 14.26 | 12.16 |
| LiDAR | 4.94 | 11.17 | 9.51 |
| Fused | 4.37 | 9.30 | 7.98 |
| Fused + EM | **2.17** | **5.78** | **5.01** |

Table 1: Effect of input features and energy minimization (EM).

**Alternate human model:** We study the effect of using different human models in Table 2. The results are reported with energy minimization included. Using the LBS model alone, we achieve \\(6.49\\) cm mean PVE. With the additional bone scale factors and per-vertex displacement vectors, the PVE is \\(5.78\\) cm, outperforming the SMPL model.

| Human Model | CD | PVE | MPJPE |
| --- | --- | --- | --- |
| LBS | 2.62 | 6.49 | 5.69 |
| SMPL | 2.44 | 6.04 | 5.17 |
| LBS + bone scale | 2.38 | 5.97 | 5.19 |
| Ours | **2.17** | **5.78** | **5.01** |

Table 2: Effect of different human models.

**Ablation on energy terms:** Results in Table 3 are reported after running energy minimization. Leveraging LiDAR point cloud observations is important for achieving lower Chamfer error. Leveraging 2D joints is important for achieving lower mean PVE and MPJPE, which measure dense and sparse correspondence between our prediction and the ground-truth shape. Each energy term contributes to the final model.
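For reference, the reconstruction metrics used throughout this section can be computed as below. This is a hedged sketch of the standard definitions (PVE and MPJPE over corresponding vertices/joints, and the square root of the symmetric Chamfer distance); the paper does not spell out further conventions such as alignment, so none are assumed here.

```python
import numpy as np

def pve(V_pred, V_gt):
    """Mean Per-Vertex-Error: average Euclidean distance between corresponding
    predicted and ground-truth mesh vertices."""
    return np.linalg.norm(V_pred - V_gt, axis=-1).mean()

def mpjpe(J_pred, J_gt):
    """Mean Per-Joint-Position-Error over corresponding 3D joints."""
    return np.linalg.norm(J_pred - J_gt, axis=-1).mean()

def chamfer_root(V_pred, V_gt):
    """Square root of the symmetric Chamfer distance between the two vertex sets."""
    d2 = np.sum((V_pred[:, None, :] - V_gt[None, :, :]) ** 2, axis=-1)
    return np.sqrt(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```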
**State-of-the-art (SoTA) comparison:** We compare our model with SoTA image-only approaches on the 3DPW [26] test set in Table 4. "PVE*" denotes the typically reported mean Per-Vertex-Error between the prediction and the ground-truth naked shape, while "PVE" denotes the mean Per-Vertex-Error between the prediction and the ground-truth clothed shape, which is more relevant to our task. We note that our approach uses sparse LiDAR, while the other SoTA approaches use ground-truth meshes and 3D poses and mix multiple datasets during training. Figure 5 shows qualitative results. Using LiDAR's sparse depth greatly improves the accuracy of the shape.

| Methods | PVE* | PVE | MPJPE |
| --- | --- | --- | --- |
| HMMR [12] | 13.93 | – | 11.65 |
| SPIN [9] | 11.64 | – | 9.69 |
| VIBE [14] | 9.91 | – | 8.29 |
| Ours | **7.36** | **8.17** | **6.57** |

Table 4: Evaluation of 3D pose estimation and shape reconstruction on the 3DPW test set. "PVE*" denotes the Per-Vertex-Error when the ground-truth human is naked.

Figure 5: Qualitative results of our method on the 3DPW [26] dataset. The sensory input consists of the camera image and the synthetic LiDAR points. We show our method using both the SMPL model and our human model from Section 3, and compare with SPIN [9].

### Simulation for Downstream Visual Application

We have demonstrated our approach on recovering human pose and shape from 3DPW pedestrian sequences in the wild, and shown that leveraging LiDAR point clouds in our energy formulation improves reconstruction performance over prior methods. We now leverage diverse and realistic pedestrians for downstream perception tasks. We conduct our experiments on the ATG4D [45] self-driving dataset, which contains diverse scenes across multiple metropolitan cities in North America, with bounding box labels annotated for each object in the scene. Each log snippet has 64-beam LiDAR sweeps at \\(10\\) Hz for \\(\\approx 25\\) seconds with corresponding camera images. We use a detector similar to PnPNet [46], which takes as input five consecutive LiDAR sweeps in bird's eye view (BEV) and outputs 3D bounding boxes for detected vehicles and pedestrians in the scene. More details about the detector can be found in the supplementary material.

We use LiME to reconstruct the pedestrian shape and pose trajectory from the ATG4D dataset. LiME accurately captures the geometry compared to the original LiDAR sequence, as seen in Figure 3. To generate the assets in Section 5, we select 211 unique pedestrian trajectory annotations from the ATG4D [45] training split, with over 3300 individually posed meshes. Each selected pedestrian trajectory annotation has: (1) visible camera images, with \\(70\\%\\) of joints having detection score \\(>\\) 0.1; (2) \\(\\geq 10\\) consecutive frames; (3) \\(\\geq 100\\) LiDAR points per frame; (4) \\(E_{\\text{sim}}<20\\), \\(E_{\\text{joint}}<6\\), and \\(E_{\\text{joint}}<22\\); (5) a complete action cycle (Sec. 5). We then use the method described in Section 5 to generate simulated LiDAR sweeps, as seen in Figure 6. We can place pedestrians in new configurations (bottom, panel one), generate occlusion (panel two), sample safety-critical behaviors (looking at a phone, panel three), or create group behavior (panel four). We show through data augmentation experiments that training on our simulated LiDAR data improves pedestrian detection performance with limited amounts of real data.
\\begin{table} \\begin{tabular}{c c c c c c} \\hline \\hline \\multicolumn{3}{c}{Objectives} & \\multicolumn{3}{c}{Error} \\\\ \\hline \\(E_{\\text{sim}}\\) & \\(E_{\\text{prior}}\\) & \\(E_{\\text{joint}}\\) & CD & PVE & MPJPE \\\\ \\hline ✓ & & & & 3.40 & 30.47 & 28.36 \\\\ ✓ & ✓ & & & 3.41 & 23.15 & 21.36 \\\\ & ✓ & ✓ & 5.84 & 11.60 & 9.96 \\\\ ✓ & & ✓ & 2.49 & 7.24 & 5.40 \\\\ ✓ & ✓ & ✓ & **2.17** & **5.78** & **5.01** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: The ablation on different objective term for shape reconstruction and 3D joint estimationEvaluating Pedestrian Detector on Simulated Data:We first evaluate the pedestrian detector on our simulated LiDAR data, and compare the result with the one evaluated on real LiDAR data. This indicates how well we can use our simulation for testing the perception system on safety-critical scenarios. To properly evaluate the realism of our simulated LiDAR points, we generate LiDARsim point clouds from the ground-truth scene layouts. The pedestrian detector was trained on real LiDAR data only. We evaluate the average precision (AP) of the detector for the pedestrian class at two IoU thresholds: \\(0.3\\) and \\(0.5\\). As seen in Table 5, our method has a small gap of 0.7 points at IoU 0.5. This means we can directly use it with little to no domain adaptation to evaluate autonomy systems. Training Data Augmentation with Simulated Data:We train the detector on varying amounts of real LiDAR data and show how performance changes when we augment the dataset with 100k examples of simulated LiDAR data containing vehicles and pedestrians. We report the results in Table 7. Note that to strictly evaluate the realism of the sensor data, the pedestrian layout and trajectory in the 100k simulated examples are different from that in the 100k real examples. When we combine simulated LiDAR data with real data, we consistently get performance gains, especially when we only have limited real data. Moreover, when we combine both large amounts of real and simulation data (100k examples each), we get about \\(3\\) AP point improvement over real data alone. As shown in Table 6, if 100k real LiDAR examples and 100k simulated LiDAR examples use the same scene layout and pedestrian trajectory, we get \\(1.7\\) AP point improvement over real data alone, highlighting the value of simulating diverse pedestrian LiDAR sequences even with the same layout. ## 7 Conclusion In this paper, we propose to leverage LiDAR and camera images collected by self-driving cars driving around a city to generate diverse pedestrian shapes and motions at scale, which we then use for accurate simulation to test and train a state-of-the-art perception system. Towards this goal, we have designed a deep-structured model, LiME, to reconstruct pedestrians in the wild using image and LiDAR sequences. We then perform motion retargeting and pedestrian scenario simulation in urban scenes to generate realistic LiDAR data. Our results show that the generated LiDAR point clouds have little domain gap and enhance the performance of downstream detectors via data augmentation. In the future we plan to reconstruct and simulate other categories such as animals. \\begin{table} \\begin{tabular}{c c c} \\hline \\hline Train data (100k) & AP\\({}_{0.3}\\) & AP\\({}_{0.5}\\) \\\\ \\hline Real & 72.0 & 66.8 \\\\ Real + Sim & 73.6 & 68.5 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 6: Trained on 100k real and 100k sim (same layout) and evaluated on real data. 
\\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline Eval data & AP\\({}_{0.3}\\) & AP\\({}_{0.5}\\) \\\\ \\hline Real & 72.0 & 66.8 \\\\ Sim & 67.8 & 66.1 \\\\ \\hline \\hline \\end{tabular} \\begin{tabular}{c c c c} \\hline \\hline Real Amount & \\multicolumn{2}{c}{Real Only} & \\multicolumn{2}{c}{Real+100k Sim} \\\\ \\cline{2-5} & AP\\({}_{0.3}\\) & AP\\({}_{0.5}\\) & AP\\({}_{0.3}\\) & AP\\({}_{0.5}\\) \\\\ \\hline 0k & – & – & 66.9 & 61.6 \\\\ 5k & 30.9 & 27.5 & 68.7 & 63.2 \\\\ 10k & 40.2 & 36.6 & 69.4 & 64.3 \\\\ 20k & 53.2 & 48.6 & 70.4 & 65.4 \\\\ 50k & 67.4 & 62.7 & 73.4 & 68.5 \\\\ 100k & 72.0 & 66.8 & 74.9 & 69.9 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 7: Training with simulated data boosts pedestrian detection performance. Figure 6: **Top Left:** reconstructed scene. **Top Right:** simulated LiDAR and pedestrian detections (green box). Detector trained on real-data only. **Bottom:** Reconstructions and simulated LiDAR. ## References * [1] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun. CARLA: An open urban driving simulator. In _CoRL_, 2017. * [2] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. _PAMI_, 2014. * [3] F. Bogo, J. Romero, M. Loper, and M. J. Black. Faust: Dataset and evaluation for 3d mesh registration. In _CVPR_, 2014. * [4] L. Sigal, A. O. Balan, and M. J. Black. Humaneva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. _IJCV_, 2010. * [5] M. Trumble, A. Gilbert, C. Malleson, A. Hilton, and J. Collomosse. Total capture: 3d human pose estimation fusing video and inertial sensors. In _BMVC_, 2017. * [6] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep it smpl: Automatic estimation of 3d human pose and shape from a single image. In _ECCV_, 2016. * [7] A. O. Balan, L. Sigal, M. J. Black, J. E. Davis, and H. W. Haussecker. Detailed human shape and pose from images. In _CVPR_, 2007. * [8] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik. End-to-end recovery of human shape and pose. In _CVPR_, 2018. * [9] N. Kolotouros, G. Pavlakos, M. J. Black, and K. Daniilidis. Learning to reconstruct 3d human pose and shape via model-fitting in the loop. In _ICCV_, 2019. * [10] T. Alldieck, G. Pons-Moll, C. Theobalt, and M. Magnor. Tex2shape: Detailed full human body geometry from a single image. In _ICCV_, 2019. * [11] A. Arnab, C. Doersch, and A. Zisserman. Exploiting temporal context for 3d human pose estimation in the wild. In _CVPR_, 2019. * [12] A. Kanazawa, J. Y. Zhang, P. Felsen, and J. Malik. Learning 3d human dynamics from video. In _CVPR_, 2019. * [13] T. Alldieck, M. Magnor, W. Xu, C. Theobalt, and G. Pons-Moll. Video based reconstruction of 3d people models. In _CVPR_, 2018. * [14] M. Kocabas, N. Athanasiou, and M. J. Black. Vibe: Video inference for human body pose and shape estimation. In _CVPR_, 2020. * [15] S. Manivasagam, S. Wang, K. Wong, W. Zeng, M. Sazanovich, S. Tan, B. Yang, W.-C. Ma, and R. Urtasun. Lidarsim: Realistic lidar simulation by leveraging the real world. In _CVPR_, 2020. * [16] H. Joo, T. Simon, and Y. Sheikh. Total capture: A 3d deformation model for tracking faces, hands, and bodies. In _CVPR_, 2018. * [17] CMU. Carnegie-mellon mocap database. URL [http://mocap.cs.cmu.edu/](http://mocap.cs.cmu.edu/). * [18] A. Agarwal and B. Triggs. Recovering 3d human pose from monocular images. _PAMI_, 2005. * [19] J. 
Martinez, R. Hossain, J. Romero, and J. J. Little. A simple yet effective baseline for 3d human pose estimation. In _ICCV_, 2017. * [20] G. Pavlakos, X. Zhou, and K. Daniilidis. Ordinal depth supervision for 3d human pose estimation. In _CVPR_, 2018. * [21] L. Sigal, M. Isard, H. Haussecker, and M. J. Black. Loose-limbed people: Estimating 3d human pose and motion using non-parametric belief propagation. _IJCV_, 2012. * [22] D. Pavllo, C. Feichtenhofer, D. Grangier, and M. Auli. 3d human pose estimation in video with temporal convolutions and semi-supervised training. In _CVPR_, 2019. * [23] B. Tekin, A. Rozantsev, V. Lepetit, and P. Fua. Direct prediction of 3d body poses from motion compensated sequences. In _CVPR_, 2016. * [24] T. von Marcard, R. Henschel, M. J. Black, B. Rosenhahn, and G. Pons-Moll. Recovering accurate 3d human pose in the wild using imus and a moving camera. In _ECCV_, 2018. * [25] N. Saini, E. Price, R. Tallamraju, R. Enficiaud, R. Ludwig, I. Martinovic, A. Ahmad, and M. J. Black. Markerless outdoor human motion capture using multiple autonomous micro aerial vehicles. In _ICCV_, 2019. * [26] T. von Marcard, R. Henschel, M. Black, B. Rosenhahn, and G. Pons-Moll. Recovering accurate 3d human pose in the wild using imus and a moving camera. In _ECCV_, 2018. * [27] F. Bogo, M. J. Black, M. Loper, and J. Romero. Detailed full-body reconstructions of moving people from monocular rgb-d sequences. In _ICCV_, 2015. * [28] C. Zimmermann, T. Welschehold, C. Dornhege, W. Burgard, and T. Brox. 3d human pose estimation in rgbd images for robotic task learning. In _ICRA_, 2018. * [29] R. A. Newcombe, D. Fox, and S. M. Seitz. Dynamicfusion: Reconstruction and tracking of non-rigid scenes in real-time. In _CVPR_, 2015. * [30] S. Saito,, Z. Huang, R. Natsume, S. Morishima, A. Kanazawa, and H. Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. _arXiv_, 2019. * [31] Z. Zheng, T. Yu, Y. Wei, Q. Dai, and Y. Liu. Deephuman: 3d human reconstruction from a single image. _arXiv_, 2019. * [32] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black. Smpl: A skinned multi-person linear model. _TOG_, 2015. * [33] C.-L. Li, T. Simon, J. Saragih, B. Poczos, and Y. Sheikh. Lbs autoencoder: Self-supervised fitting of articulated meshes to point clouds. In _CVPR_, 2019. * [34] G. Echeverria, N. Lassabe, A. Degroote, and S. Lemaignan. Modular open robots simulation engine: Morse. In _ICRA_, 2011. * recent perspectives with the morse simulator. 2014. * [36] L. Kavan and J. Zara. Spherical blend skinning: a real-time deformation of articulated models. In _I3D_, 2005. * [37] P. Joshi, M. Meyer, T. DeRose, B. Green, and T. Sanocki. Harmonic coordinates for character articulation. _TOG_, 2007. * [38] W. Yuan, T. Khot, D. Held, C. Mertz, and M. Hebert. Pcn: Point completion network. In _3DV_, 2018. * [39] E. Insafutdinov and A. Dosovitskiy. Unsupervised learning of shape and pose with differentiable point clouds. In _NIPS_, 2018. * [40] T. Moller and B. Trumbore. Fast, minimum storage ray/triangle intersection. In _ACM SIGGRAPH 2005 Courses_, 2005. * [41] Y. Wu, A. Kirillov, F. Massa, W.-Y. Lo, and R. Girshick. Detectron2, 2019. * [42] O. Sorkine, D. Cohen-Or, Y. Lipman, M. Alexa, C. Rossl, and H.-P. Seidel. Laplacian surface editing. In _Eurographics/ACM SIGGRAPH symposium on Geometry processing_, 2004. * [43] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. _arXiv_, 2014. * [44] K. Shoemake. Animating rotation with quaternion curves. 
In _SIGGRAPH_, 1985. * [45] B. Yang, W. Luo, and R. Urtasun. Pixor: Real-time 3d object detection from point clouds. In _CVPR_, 2018. * [46] M. Liang, B. Yang, W. Zeng, Y. Chen, R. Hu, S. Casas, and R. Urtasun. Pnpnet: End-to-end perception and prediction with tracking in the loop. In _CVPR_, 2020.

## Appendix

In this supplementary material, we cover additional details and analysis of our method for recovering and simulating pedestrians in the wild. In Section A1 we provide details about our obstacle-aware ray-tracer that allows us to incorporate LiDAR observations to improve 3D pose and shape reconstruction, and we discuss the differentiability of our ray-tracer. Then in Section A2 we provide more details about our learning and inference pipelines. Finally, in Section A3 we provide the details of the pedestrian detector used in the experiments. Additionally, please see our supplementary video, which showcases (1) the motivation and an overview of our LiME (LiDAR for human Mesh Estimation) methodology; (2) human shape and pose reconstruction results using LiME on our real-world data, demonstrating the diversity of pedestrians we recover; (3) the application of our pedestrian asset bank for downstream evaluation of perception algorithms trained only on real data; and (4) a demonstration of our method for training perception algorithms, showing a side-by-side comparison of a detector trained on either simulated or real data and evaluated on real data.

### A1 Details of our Obstacle-aware Differentiable Ray-tracer

As described in the main paper, real LiDAR point cloud observations of pedestrians in the wild are sparse (due to distance and LiDAR resolution) and partial (due to occlusions by other objects). The LiDAR sensor can be approximated via ray casting, where each laser ray shot by the sensor is parameterized in spherical coordinates \\((r,\\phi,\\theta)\\), representing the radius (distance travelled), azimuth, and elevation of the ray. We therefore design our ray-tracer to generate synthetic point clouds that better match the real ones by using the same LiDAR resolution when generating the ray-casted rays and by removing ray-casted rays that hit occluding objects, which we can infer from the real LiDAR point cloud.

Ray-casting algorithm: Given the pedestrian LiDAR point cloud and the LiDAR sensor location, we first compute the radius, azimuth, and elevation ranges of the rays that might hit the pedestrian as \\(\\{r_{\\text{min}},r_{\\text{max}}\\}\\), \\(\\{\\phi_{\\text{min}},\\phi_{\\text{max}}\\}\\) and \\(\\{\\theta_{\\text{min}},\\theta_{\\text{max}}\\}\\). We determine these values based on the 3D bounding box enclosing the pedestrian LiDAR point cloud. We then compute the set of rays within the azimuth and elevation range according to the resolution of the LiDAR sensor \\((d_{\\phi},d_{\\theta})\\):

\\[\\mathcal{R}=\\left\\{(i\\,d_{\\phi},\\,j\\,d_{\\theta})\\,\\Big{|}\\,\\lfloor\\tfrac{\\phi_{\\text{min}}}{d_{\\phi}}\\rfloor<i<\\lfloor\\tfrac{\\phi_{\\text{max}}}{d_{\\phi}}\\rfloor,\\;\\lfloor\\tfrac{\\theta_{\\text{min}}}{d_{\\theta}}\\rfloor<j<\\lfloor\\tfrac{\\theta_{\\text{max}}}{d_{\\theta}}\\rfloor\\right\\} \\tag{10}\\]

where \\(\\lfloor\\cdot\\rfloor\\) is the floor function, and \\(i,j\\) are integers. For each ray \\(\\mathbf{r}=(i\\,d_{\\phi},j\\,d_{\\theta})\\in\\mathcal{R}\\), \\(i\\,d_{\\phi}\\) is the azimuth of the ray and \\(j\\,d_{\\theta}\\) is the elevation of the ray. For simplicity, we assume the centre of the ray-caster is at the origin \\(\\mathbf{o}\\).
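The ray-selection step of Eq. (10) can be sketched as follows. For simplicity, this version derives the azimuth/elevation extent directly from the pedestrian's LiDAR points rather than from the 3D bounding box, which is a simplifying assumption of ours.

```python
import numpy as np

def candidate_rays(points, d_phi, d_theta):
    """Eq. (10): enumerate the (azimuth, elevation) grid of LiDAR rays that fall
    strictly inside the azimuth/elevation extent of the pedestrian's points."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    phi = np.arctan2(y, x)                        # azimuth of each observed point
    theta = np.arctan2(z, np.hypot(x, y))         # elevation of each observed point
    i = np.arange(int(np.floor(phi.min() / d_phi)) + 1,
                  int(np.floor(phi.max() / d_phi)))
    j = np.arange(int(np.floor(theta.min() / d_theta)) + 1,
                  int(np.floor(theta.max() / d_theta)))
    return [(ii * d_phi, jj * d_theta) for ii in i for jj in j]
```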
We then cast the set of rays \\(\\mathcal{R}\\) into the reconstructed mesh using the Moller-Trumbore [40] ray casting algorithm. Moller-Trumbore ray casting efficiently computes the ray-triangle intersection for each triangle in the mesh by converting the representation of the intersection point \\(\\mathbf{p}\\) from Cartesian coordinates to the Barycentric coordinates of the triangle of interest. We define the Cartesian coordinates of the intersection point as \\(\\mathbf{p_{cart}}=\\mathbf{o}+c\\,\\mathbf{d}\\), where \\(\\mathbf{o}\\) and \\(\\mathbf{d}\\) are the origin and direction of the ray-casted ray in \\((x,y,z)\\) Cartesian coordinate space, and \\(c\\) is the distance travelled. For a triangle face \\(\\mathbf{f}\\) with vertices \\((\\mathbf{v}_{1},\\mathbf{v}_{2},\\mathbf{v}_{3})\\), we define \\(\\mathbf{e}_{1}=\\mathbf{v}_{2}-\\mathbf{v}_{1}\\), \\(\\mathbf{e}_{2}=\\mathbf{v}_{3}-\\mathbf{v}_{2}\\) and \\(\\mathbf{t}=\\mathbf{o}-\\mathbf{v}_{1}\\). The Barycentric coordinates \\((u,v)\\) of the intersection point are obtained by solving:

\\[\\begin{bmatrix}c\\\\ u\\\\ v\\end{bmatrix}=\\frac{1}{(\\mathbf{d}\\times\\mathbf{e}_{2})\\cdot\\mathbf{e}_{1}}\\begin{bmatrix}(\\mathbf{t}\\times\\mathbf{e}_{1})\\cdot\\mathbf{e}_{2}\\\\ (\\mathbf{d}\\times\\mathbf{e}_{2})\\cdot\\mathbf{t}\\\\ (\\mathbf{t}\\times\\mathbf{e}_{1})\\cdot\\mathbf{d}\\end{bmatrix} \\tag{11}\\]

where \\(\\times\\) is the cross product operator and \\(\\cdot\\) is the inner product operator between two vectors. If the intersection point lies inside the triangle, we can convert it back to Cartesian coordinates as \\(\\mathbf{y}=\\mathbf{v}_{1}+u\\mathbf{e}_{1}+v\\mathbf{e}_{2}\\). Note that if the ray intersects multiple triangle faces, we choose the ray-casted point with **minimum** distance to the ray-caster origin. The ray-casted points on the mesh form the set \\(\\mathbf{Y}\\) (Eq. 5 in the main paper).

Occlusion-aware ray-caster: Directly using a ray-tracer to generate a synthetic point cloud will not match the observed LiDAR points well if the set of rays \\(\\mathcal{R}\\) hits occluding objects in the real LiDAR scene. Not accounting for these occlusions would incorrectly penalize the posed mesh for generating points in regions that are not visible to the real LiDAR sensor. To account for occlusions, we first define an occluding object as an object lying between the sensor and the bounding box enclosing the pedestrian LiDAR points. Note that we do not account for occlusion inside the bounding box. The set of points in the real LiDAR scan forming the occlusion is then:

\\[\\mathbf{O}=\\{\\mathbf{p}\\mid r_{\\mathbf{p}}<r_{\\text{min}},\\;\\phi_{\\text{min}}<\\phi_{\\mathbf{p}}<\\phi_{\\text{max}},\\;\\theta_{\\text{min}}<\\theta_{\\mathbf{p}}<\\theta_{\\text{max}}\\} \\tag{12}\\]

where \\(r_{\\mathbf{p}},\\phi_{\\mathbf{p}},\\theta_{\\mathbf{p}}\\) are the radius, azimuth and elevation of point \\(\\mathbf{p}\\), respectively. We determine the rays in \\(\\mathcal{R}\\) that hit the occlusion \\(\\mathbf{O}\\) as:

\\[\\mathcal{O}=\\left\\{\\left(\\lfloor\\tfrac{\\phi_{\\mathbf{p}}}{d_{\\phi}}\\rfloor,\\lfloor\\tfrac{\\theta_{\\mathbf{p}}}{d_{\\theta}}\\rfloor\\right)\\,\\Big{|}\\,\\mathbf{p}\\in\\mathbf{O}\\right\\} \\tag{13}\\]

We then mask out the occluded rays \\(\\mathcal{O}\\) from \\(\\mathcal{R}\\) and use the remaining rays \\(\\mathcal{R}\\setminus\\mathcal{O}\\) to generate the ray-casted points \\(\\mathbf{Y}\\) and compute \\(E_{\\text{sim}}\\) (Eq. 5 in the main paper):
\\[E_{\\text{sim}}(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t},\\mathbf{s},\\mathbf{D})=\\frac{1}{\\left|\\mathbf{X}\\right|}\\sum_{\\mathbf{x}\\in\\mathbf{X}}\\min_{\\mathbf{y}\\in\\mathbf{Y}}\\left\\|\\mathbf{x}-\\mathbf{y}\\right\\|_{2}^{2}+\\frac{1}{\\left|\\mathbf{Y}\\right|}\\sum_{\\mathbf{y}\\in\\mathbf{Y}}\\min_{\\mathbf{x}\\in\\mathbf{X}}\\left\\|\\mathbf{y}-\\mathbf{x}\\right\\|_{2}^{2} \\tag{14}\\]

See Figure 7 for a visual explanation.

Figure 7: We determine the rays to be cast using the bounding box enclosing the object LiDAR points (blue), and we mask out the rays that hit obstacles (orange). We use the remaining rays to compute the ray-casted points.

Differentiability of the ray-tracer: The coordinate of the ray-casted point is

\\[\\mathbf{y}=\\mathbf{v}_{1}+u\\mathbf{e}_{1}+v\\mathbf{e}_{2}=\\mathbf{v}_{1}+u(\\mathbf{v}_{2}-\\mathbf{v}_{1})+v(\\mathbf{v}_{3}-\\mathbf{v}_{2})=(1-u)\\mathbf{v}_{1}+(u-v)\\mathbf{v}_{2}+v\\mathbf{v}_{3} \\tag{15}\\]

which is a linear combination of the vertices \\(\\mathbf{v}_{1},\\mathbf{v}_{2},\\mathbf{v}_{3}\\) of the mesh. The LiDAR ray-tracer is therefore differentiable with respect to the mesh vertices. Although the ray-caster is not differentiable with respect to which triangle it intersects, empirically we find this works well in practice, as we have additional energy terms such as the 2D-joint consistency from the image to provide key-point correspondence supervision, and shape and pose priors to help with the shape and pose estimates.

### A2 Learning and Inference Details

In our sensor fusion network, we use ResNet-50 as the 2D CNN backbone, and we use the Point Completion Network [38] as the point cloud feature extractor. To learn the neural network, we use a batch size of 16 and the Adam optimizer with a learning rate of \\(10^{-4}\\), and we train the network for \\(50000\\) iterations. When we perform energy minimization, we use the Adam optimizer with a learning rate of \\(10^{-2}\\). The weights for the simulation, joint, pose prior, bone scale prior, \\(\\ell_{2}\\) smoothness and Laplacian terms are \\(144^{2}\\), \\(0.2^{2}\\), \\(0.478^{2}\\), \\(2^{2}\\), \\(100^{2}\\) and \\(1000^{2}\\), respectively.

### A3 Pedestrian Detector Details

We use an object detector similar to [46]. It takes as input five consecutive LiDAR sweeps (0.5 s) in bird's eye view (BEV). The LiDAR data uses a voxel-based representation in BEV, and the five consecutive sweeps are combined by concatenating along the height dimension (with the ego-motion compensated for the previous sweeps). Each instance label box includes at least one pedestrian LiDAR point. Given the aforementioned BEV representation of the LiDAR as input, the network first down-samples the input BEV image by a factor of 4 using three Conv2D layers. Then a cross-scale module [46] is applied three times sequentially. Next, an FPN is applied to fuse the multi-scale feature maps, resulting in a 4x down-sampled BEV feature map. Finally, we use 4 Conv2D layers to generate the 3D bounding box predictions.
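As a rough illustration of the detector input described above, the sketch below builds a BEV occupancy grid from five ego-motion-compensated sweeps and concatenates them along the height dimension. The spatial ranges and the 0.2 m voxel size are placeholder values of our own, not parameters reported in the paper.

```python
import numpy as np

def bev_voxelize(sweeps, x_range=(-40.0, 40.0), y_range=(-40.0, 40.0),
                 z_range=(-2.0, 4.0), voxel=0.2):
    """Occupancy BEV grid: each sweep (M, 3) in the ego frame is voxelised and the
    per-sweep height channels are stacked along the channel dimension."""
    nx = int((x_range[1] - x_range[0]) / voxel)
    ny = int((y_range[1] - y_range[0]) / voxel)
    nz = int((z_range[1] - z_range[0]) / voxel)
    grid = np.zeros((len(sweeps) * nz, ny, nx), dtype=np.float32)
    for s, pts in enumerate(sweeps):
        ix = ((pts[:, 0] - x_range[0]) / voxel).astype(int)
        iy = ((pts[:, 1] - y_range[0]) / voxel).astype(int)
        iz = ((pts[:, 2] - z_range[0]) / voxel).astype(int)
        ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny) & (iz >= 0) & (iz < nz)
        grid[s * nz + iz[ok], iy[ok], ix[ok]] = 1.0  # mark occupied voxels
    return grid
```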
## Summary

Sensor simulation is a key component for testing the performance of self-driving vehicles and for data augmentation to better train perception systems. Typical approaches rely on artists to create both 3D assets and their animations to generate a new scenario. This, however, does not scale. In contrast, we propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car driving around. Towards this goal, we formulate the problem as energy minimization in a deep structured model that exploits human shape priors, reprojection consistency with 2D poses extracted from images, and a ray-caster that encourages the reconstructed mesh to agree with the LiDAR readings. Importantly, we do not require any ground-truth 3D scans or 3D pose annotations. We then incorporate the reconstructed pedestrian asset bank into a realistic LiDAR simulation system by performing motion retargeting, and show that the simulated LiDAR data can be used to significantly reduce the amount of annotated real-world data required for visual perception tasks.

Keywords: Pedestrian Reconstruction, Pedestrian LiDAR Simulation
"Geosci. Instrum. Method. Data Syst., 1, 111-134, 2012\n\nwww.geosci-instrum-method-data-syst.net/1/111/2012/\n\ndoi:10.5194/gi-1-111-2012\n\nO Author(s) 2012. CC Attribution 3.0 License.\n\nThe GPlates Geological Information Model and Markup Language\n\nX. Qin\\({}^{1}\\), R. D. Muller\\({}^{1}\\), J. Cannon\\({}^{1}\\), T. C. W. Landgrebe\\({}^{1}\\), C. Heine\\({}^{1}\\), R. J. Watson\\({}^{2}\\), and M. Turner\\({}^{3}\\)\n\n\\({}^{1}\\)EarthByte Group, School of Geosciences, University of Sydney, Sydney, NSW 2006, Australia\n\n\\({}^{2}\\)Geodynamics Team, Geological Survey of Norway, P.O. Box 6315, Sluppen, 7491 Trondheim, Norway\n\n\\({}^{3}\\)Seismological Laboratory, California Institute of Technology, Pasadena, California, USA\n\nX. Qin ([email protected])\n\nReceived: 18 May 2012 - Published in Geosci. Instrum. Method. Data Syst. Discuss.: 4 July 2012\n\n10 September 2012 - Accepted: 10 September 2012 - Published: 8 October 2012\n\nUnderstanding tectonic and geodynamic processes leading to the present-day configuration of the Earth involves studying data and models across a variety of disciplines, from geochemistry, geochronology and geophysics, to plate kinematics and mantle dynamics. All these data represent a 3-D spatial and 1-D temporal framework, a formalism which is not exploited by traditional spatial analysis tools. This is arguably a fundamental limit in both the rigour and sophistication in which datasets can be combined for geological deep time analysis, and often confines the extent of data analyses to the present-day configurations of geological objects. The GPlates Geological Information Model (GPGIM) represents a formal specification of geological and geophysical data in a time-varying plate tectonics context, used by the GPlates virtual-globe software. It provides a framework in which relevant types of geological data are attached to a common plate tectonic reference frame, allowing the data to be reconstructed in a time-dependent spatio-temporal plate reference frame. The GPlates Markup Language (GPML), being an extension of the open standard Geography Markup Language (GML), is both the modelling language for the GPGIM and an XML-based data format for the interoperable storage and exchange of data modelled by it. The GPlates software implements the GPGIM allowing researchers to query, visualise, reconstruct and analyse a rich set of geological data including numerical raster data. The GPGIM has recently been extended to support time-dependent geo-referenced numerical raster data by wrapping GML primitives into the time-dependent framework of the GPGIM. Coupled with GPlates' ability to reconstruct numerical raster data and import/export from/to a variety of raster file formats, as well as its handling of time-dependent plate boundary topologies, interoperability with geodynamic softwares is established, leading to a new generation of deep-time spatio-temporal data analysis and modelling, including a variety of new functionalities, such as 4-D data-mining.\n\n## References\n\n* Abadi et al. (1995) Abadi, M., Cardelli, L., Pierce, B. C., and Remy, D.: Dynamic typing in polymorphic languages, J. Funct. Programm., 5, 111-130, 1995.\n* Boag et al. (2010) Boag, S., Chamberlin, D., Fernandez, M. F., Florescu, D., Robie, J. and Simeon, J. 
Figure 33: Screenshot showing GPlates displaying geometries (green) defined in GeoSciML data, which is retrieved by Web Feature Service (WFS) from a Geoscience Australia website. The service base URL is http://www-a.ga.gov.au/geows/geologicunits/oneg_aus_2_5m/wfs. The WFS request is shown in the dialog (left). All GPlates layers are listed in the dialog (right).

plate-tectonic information modelling and software - a generation in which plate-tectonic data and applications are an integrated visualisation and processing component within a data grid and computational grid; and a plate-tectonic reconstruction is no longer an isolated result, but a single stage in an adaptable workflow.

Acknowledgements. We wish to acknowledge James Boyden and James Clark as pioneers of initial GPGIM development and their substantial contributions to GPlates. X. Q., R. D. M., J. C. and T. C. W. L. are supported by ARC grant FL0992245, C. H. was supported by ARC grant LP0989312, and GPlates and GPGIM development was supported by the AuScope NCRIS project (www.auscope.org.au) in Sydney.
arxiv-format/2303_08454v1.md
# Range-Aided LiDAR-Inertial Multi-Vehicle Mapping in Degenerate Environment

Zhe Jin, Chaoyang Jiang

This work is supported by the National Natural Science Foundation of China (No. 52002026, U2OA20333) and the National Key Research and Development Project (No. 2020YFC1512500). (_Corresponding author: Chaoyang Jiang_.) The authors are with the School of Mechanical Engineering, Beijing Institute of Technology, Beijing, China, 100081 (email: [email protected]; [email protected]).

## I Introduction

Multi-vehicle simultaneous localization and mapping (SLAM) has been widely used for search and rescue, maintenance investigations, underwater detection, and space exploration [1]. It is a great challenge for a single vehicle to handle such tasks in large-scale and degenerate environments, while multiple vehicles working together have great potential to improve mapping accuracy and efficiency. Therefore, multi-vehicle collaborative mapping systems have increasingly attracted attention in recent years [2]. Features in large-scale and degenerate environments are usually sparse, which leads to large accumulated errors for SLAM systems. Fortunately, range sensors are insensitive to geometric degeneracy as long as their signals are not occluded. On one hand, range constraints are simpler and more efficient than finding loop closures for collaborative mapping; on the other hand, range factors can be easily introduced into a pose graph optimization (PGO) procedure. Therefore, range-aided multi-vehicle SLAM has great potential to improve the robustness of localization and mapping in degenerate environments.

### _Related works_

Multi-vehicle mapping has two main branches: centralized mapping and decentralized mapping [3]. Centralized mapping systems collect and optimize messages from all connected vehicles. Riazuelo et al. [4] proposed a typical centralized mapping system in which the expensive map optimization and storage were allocated on a cloud server while a light camera tracking client ran on a local computer. Deutsch et al. [5] further introduced a software framework for real-time multi-vehicle collaborative SLAM, which can potentially work with various SLAM algorithms. They both require an external server for the aggregation of data and information feedback, and thus network delays become a hidden problem. In contrast, Dube et al. [6] shifted the master node into one of the vehicles and proposed a fully-integrated online multi-vehicle SLAM system, which saves long-distance communication but requires a high-performance onboard processor. Decentralized methods do not rely on a central server and split the computation across the vehicle nodes. Choudhary et al. [1] proposed a set of distributed algorithms for pose graph optimization in which vehicles communicate and exchange relative measurements only when a rendezvous is detected. Different from [1], inter-vehicle communications and pose-graph optimization are performed in real time in [7]. Lajoie et al. [8] then extended and improved the above two methods [1, 7], and proposed DOOR-SLAM, a fully distributed SLAM system with an outlier rejection mechanism that can work with less conservative parameters. The above-mentioned multi-vehicle mapping systems applied inter- and intra-vehicle loop detection to address data association and have achieved great progress. However, they still cannot work well in degenerate environments, especially when environmental characteristics are similar.
Degeneracy is caused by insufficient constraints in some directions, leading to less robust state estimation. Degenerate environments are characterized by a lack of geometrical, textural, and/or thermal features. Zhang et al. [9] first proposed a degeneration detection method and separated the degenerate directions in the state space to reduce the influence of the degeneracy in structured environments. Similarly, Hinduja et al. [10] only optimized the pose graph in well-constrained directions of the state space. These directions were selected based on a dynamic threshold and updated in real time. Extending the above two methods, Ren et al. [11] proposed a reliable degeneracy indicator that can evaluate the scan-matching performance in off-road environments. The evaluated degeneracy indicator was then integrated into a factor graph optimization framework. However, these methods [9, 10, 11] only adopted a single sensor and were unable to optimize the degenerate dimension. Khattak et al. [12] utilized a visual-inertial odometry and a thermal-inertial odometry to find robust priors for LiDAR pose estimation. One of the two odometries was selected for propagation when the LiDAR odometry failed due to degeneration, which can improve the reliability of the pose estimation. Great progress has been achieved in the past decades, but robust mapping is still a big challenge in degenerate environments.

Degenerate environments have no influence on the distance observations of range sensors like Bluetooth, ultra-wideband (UWB) ranging sensors, Zigbee and WiFi. Song et al. [13] fused LiDAR and UWB measurements for single-vehicle localization, and allowed the unknown anchors to change their positions. To some extent, this approach was more robust and resisted degeneration. Similarly, applying more sensors, such as an inertial measurement unit (IMU), a LiDAR, and a camera, Nguyen et al. [14] presented a comprehensive optimization-based estimator for the state of an unmanned aerial vehicle. Both methods [13, 14] depend on preset anchors, which greatly limits their application to multi-vehicle cases. Xu et al. [15] proposed a decentralized state estimation system, fusing stereo wide-field-of-view cameras and UWB sensors for a multi-vehicle case. Similarly, Nguyen et al. [16] proposed a visual-inertial-UWB multi-vehicle localization system that loosely fuses the UWB and visual-inertial odometry data while tightly fusing all onboard sensors. Both methods achieved a great localization improvement but only in small-scale and non-degenerate environments. The current range-aided methods focus on localization with or without anchor beacons, but few of them focus on mapping in degenerate environments.

Prior related works on multi-vehicle mapping are rich, but they still have further room for improvement: 1) range-aided multi-vehicle mapping systems with fixed anchors can hardly extend to large-scale environments due to the requirement for numerous anchors, while those without anchors still cannot work well in degenerate environments; 2) centralized systems rely on a central server, which is vulnerable, and decentralized systems cannot easily achieve a globally consistent map in real time; 3) most anti-degenerate methods ignore the information about degeneration directions or compensate with other sensors, like thermal sensors, that depend on environmental features; 4) few works cope with degeneration correction.
### _Contribution_

Considering the above-mentioned problems, we propose the RaLI-Multi: a range-aided LiDAR-inertial multi-vehicle mapping system. Each vehicle performs a local mapping procedure with IMU integration, LiDAR feature extraction and registration, degeneration detection, and degeneration correction. Range measurements compensate for the error in the degenerate direction when both the degenerate level and the gap between the LiDAR-inertial odometry and the range measurements exceed their preset thresholds. The RaLI-Multi dynamically schedules one vehicle as an anchor vehicle, which stops and can be viewed as an anchor for range measurements. The anchor vehicle also acts as a temporary central server, which receives local maps, LiDAR-inertial odometry, and range constraints between vehicles to optimize and broadcast the global map, which in turn updates the local states of each vehicle. The main contributions of this paper are as follows:

1. We propose a multi-metric-weight LiDAR-inertial front-end, which assigns weights to each feature point and can achieve better odometry in degenerate environments.
2. A geometry-based degeneration detection method is proposed as the foundation of the following degeneration correction module, which can monitor the degeneration level online and estimate the corresponding degenerate direction.
3. The range-aided degeneration correction module compensates for the error of the LiDAR-inertial odometry along the degenerate direction, which is considered the main component of the pose estimation error. In this way, we can improve the robustness of the mapping system in degenerate environments.
4. The proposed RaLI-Multi has the advantages of both centralized and decentralized methods. All vehicles have communications with the central node and share the same global map. The anchor vehicle plays the role of the central node, which can dynamically shift to other vehicles. Hence, the proposed system is more robust and flexible and has the potential to be applied in large-scale degenerate environments.

### _Notations and Outline_

We denote a point cloud set captured by a 3D LiDAR sensor on a vehicle as \\(\\mathbf{\\mathcal{L}}\\), and denote a processed feature cloud and normal cloud as \\({}^{\\mathcal{F}}\\mathcal{L}\\) and \\({}^{\\mathcal{N}}\\mathcal{L}\\), respectively. Range measurements between vehicle \\(j\\) and vehicle \\(k\\) are denoted by \\(u_{i}^{jk}\\in\\mathbf{\\mathcal{U}}\\). The elements of these sets carry a time-sequence subscript, e.g., \\(\\left(\\cdot\\right)_{t}\\) or \\(\\left(\\cdot\\right)_{i}\\). \\(\\mathcal{X}\\) is the vehicle state including position, orientation, velocity, etc. For simplicity, we also represent the position of a vehicle as \\(\\mathbf{x}\\in\\mathbb{R}^{3}\\). The initial pose transformations between the tag vehicles and the anchor vehicle are denoted by \\(\\mathbf{\\mathcal{T}}=\\left\\{\\mathcal{T}^{1},\\mathcal{T}^{2},\\mathcal{T}^{3},\\cdots\\right\\}\\),

\\[\\mathcal{T}^{v}=\\left[\\begin{array}{cc}\\mathbf{R}^{v}&\\mathbf{t}^{v}\\\\ 0&1\\end{array}\\right]\\in SE\\left(3\\right),v=\\left\\{1,2,3,\\cdots\\right\\} \\tag{1}\\]

where \\(\\mathbf{R}^{v}\\in SO\\left(3\\right)\\) and \\(\\mathbf{t}^{v}\\in\\mathbb{R}^{3}\\) are the rotation matrix and the translation vector, respectively. The corresponding quaternion of the rotation is represented in Hamilton notation.
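For concreteness, the initial transformation in (1) can be represented as below. This is only an illustrative sketch using Eigen (whose quaternions follow the Hamilton convention); the struct and member names are hypothetical and not part of the RaLI-Multi implementation.

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>

// Initial transformation T^v in SE(3) between a tag vehicle's local frame and
// the anchor (global) frame, stored as a rotation plus a translation.
struct InitialTransform {
  Eigen::Quaterniond q = Eigen::Quaterniond::Identity();  // Hamilton convention
  Eigen::Vector3d t = Eigen::Vector3d::Zero();

  // Map a position expressed in the vehicle's local frame into the global
  // frame: x_global = R^v * x_local + t^v.
  Eigen::Vector3d toGlobal(const Eigen::Vector3d& x_local) const {
    return q * x_local + t;
  }

  // The equivalent 4x4 homogeneous matrix of Eq. (1).
  Eigen::Matrix4d matrix() const {
    Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
    T.block<3, 3>(0, 0) = q.toRotationMatrix();
    T.block<3, 1>(0, 3) = t;
    return T;
  }
};
```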
The rest of this paper is organized as follows. Section II provides the overview. Section III presents the details of the RaLI-Multi mapping system. Experiment results are shown in Section IV, and conclusions are given in Section V.

## II Overview

### _System Definition_

We propose a range-aided LiDAR-inertial multi-vehicle mapping system, in which all vehicles carry the same onboard hardware and software. Each vehicle has an IMU, a LiDAR, a range sensor, a router, and a computing unit. All vehicles have two roles: the anchor role and the tag role, but they cannot be activated simultaneously. If the anchor role is activated, the vehicle acts as an anchor vehicle and vice versa. During the exploration, one of the vehicles is automatically selected to be the anchor. The anchor vehicle also plays the role of the central node of such a multi-vehicle network. All other vehicles are called tag vehicles. The RaLI-Multi mapping procedure consists of continuous exploration rounds, as shown in Fig. 1. Each round begins with the tag-vehicle exploration and ends with the anchor-vehicle selection. In the first round, a dynamical initialization (see Section III-D) is required, which estimates the relative transformation between the global frame (the coordinate frame of the initial anchor vehicle) and the local frames (coordinate frames of the initial tag vehicles). When all tag vehicles finish their exploration, the role of the anchor and the central node shifts from one vehicle to another. A tag vehicle finishes its exploration in the current round if one of the following three events is triggered: 1) the Received Signal Strength Indicator (RSSI) of the communication is less than a pre-defined threshold; 2) the distance to the anchor vehicle exceeds a pre-defined value; 3) the environment around the tag vehicle has been fully explored.

### _Problem Formulation_

We aim to reconstruct 3-D maps of large-scale environments with degeneration via multiple vehicles. Our main ideas are applying range observations between the anchor vehicle and all tag vehicles for degeneration correction, and utilizing communications and range observations to improve the global mapping and the pose estimation of all vehicles. Consequently, this work mainly focuses on the following three problems:

1. How to correct the localization and mapping for degenerate cases?
2. How to globally optimize the mapping and the pose estimation of all vehicles in such a RaLI-Multi mapping system?
3. How to dynamically select the role of the anchor vehicle?

### _System Overview_

The structure of the tag-vehicle exploration is shown in Fig. 2. The anchor vehicle stays stationary while all other vehicles, i.e., the tag vehicles, explore the environment. Each tag vehicle performs a LiDAR-inertial odometry, a degeneration detection module, a degeneration correction module with the range measurements from the anchor vehicle, and a local PGO. With the information received from the tag vehicles, the anchor vehicle optimizes the poses of the tag vehicles and the global map. When all tag vehicles finish their exploration, one of the vehicles is dynamically selected to be the anchor vehicle in the anchor transfer module, followed by the next exploration round (a minimal sketch of this per-round logic is given below).

Fig. 1: Illustration of two exploration rounds. Blue dotted lines represent the trajectories of tag vehicles and green dashed lines represent range measurements. In the former round, vehicle 2 is selected to be the anchor vehicle and vehicles 1 and 3 are tag vehicles. During exploration, vehicle 3 detects degeneration, which is then corrected by the range measurements between vehicle 2 and vehicle 3. When both tag vehicles finish their exploration, the anchor role is transferred to vehicle 3, and the latter round starts. The last time stamp of the former round and the first time stamp of the latter round coincide.
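The following C++ sketch illustrates the per-round logic just described: the three finish conditions for a tag vehicle and the choice of the next anchor as the vehicle closest to the center of the largest merged frontier (see also Section III-F). All type and parameter names here are hypothetical, and frontier merging itself is omitted; this is an illustration of the protocol, not the actual implementation.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-vehicle status used only for this illustration.
struct TagStatus {
  int id = 0;
  double rssi = 0.0;            // signal strength of the link to the anchor
  double dist_to_anchor = 0.0;  // smoothed range measurement to the anchor
  bool has_frontier = true;     // false once its area is fully explored
};

// A merged frontier (boundary between known free space and unknown space),
// together with each tag vehicle's distance to its center.
struct MergedFrontier {
  double size = 0.0;
  std::vector<double> dist_from_tag;  // indexed like the `tags` vector below
};

// Finish check for one tag vehicle in the current round (Section II-A).
bool roundFinished(const TagStatus& v, double rssi_min, double max_range) {
  return v.rssi < rssi_min || v.dist_to_anchor > max_range || !v.has_frontier;
}

// Next-anchor selection: the vehicle closest to the center of the largest
// merged frontier becomes the new anchor. Returns -1 if nothing remains.
int selectNextAnchor(const std::vector<TagStatus>& tags,
                     const std::vector<MergedFrontier>& frontiers) {
  int largest = -1;
  for (std::size_t f = 0; f < frontiers.size(); ++f)
    if (largest < 0 || frontiers[f].size > frontiers[largest].size)
      largest = static_cast<int>(f);
  if (largest < 0) return -1;

  int next_anchor = -1;
  double best_dist = 0.0;
  for (std::size_t i = 0; i < tags.size(); ++i) {
    const double d = frontiers[largest].dist_from_tag[i];
    if (next_anchor < 0 || d < best_dist) {
      next_anchor = tags[i].id;
      best_dist = d;
    }
  }
  return next_anchor;
}
```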
Fig. 2: System structure of the tag-vehicle exploration. At the end of each round, a new vehicle is selected to be the anchor vehicle via the anchor transfer module on the current anchor vehicle. Tag roles are then triggered for the rest of the tag vehicles.

Each tag vehicle first preprocesses the raw data received from its onboard IMU, LiDAR, and range sensor. The observations of the IMU are pre-integrated (see Section III-A1). Features are extracted from the point cloud of the LiDAR (see Section III-A2), and the range measurements are pre-smoothed. Then, the LiDAR-inertial front-end takes the pre-integrated IMU states as an initial guess to perform scan-to-map registration (see Section III-A3). Meanwhile, the features are used for degeneration detection (see Section III-B), and the range constraints are used for degeneration correction (see Section III-C2). Finally, the corrected LiDAR odometry, IMU pre-integration, and range constraints are jointly optimized via a local PGO in the back end. With the above procedure, the first question mentioned in Section II-B is answered.

During a round, local data from each tag vehicle are sent to the anchor vehicle after local PGO for global PGO. If tag vehicles have stable range signals between each other and the RSSI is stronger than the pre-set threshold, these range measurements are also added to the global PGO (see Section III-C1). The anchor vehicle then performs an incremental global optimization and map merging (see Section III-E). After global optimization, the anchor vehicle broadcasts the global map and the optimized states of all tag vehicles to each tag vehicle. In this way, we solve the second problem mentioned in Section II-B.

Like the classical frontier-based exploration method [17], we define frontiers as the boundary between known free space and unknown space. If no frontiers exist, the environment is regarded as fully explored. When all tag vehicles finish their exploration, the current anchor vehicle starts to select the next anchor vehicle (see Section III-F). The frontiers of each tag vehicle are combined if they are close to each other. The vehicle that is closest to the center of the largest frontier is selected as the new anchor. In such a manner, the third problem mentioned in Section II-B is addressed.

To illustrate the workflow of the RaLI-Multi, a two-vehicle example is shown in Fig. 3, which shows how the two vehicles explore a corridor-like environment. The global coordinate frame is fixed on the local coordinate frame of the blue triangle, i.e., the initial anchor vehicle. Before mapping, an initial relative pose prior between the two vehicles, consisting of a range measurement and pre-set parameters, is added to the pose graph, as shown in the yellow rectangle. Next, the tag vehicle, i.e., the orange triangle, begins to explore, as shown in Fig. 3 (b). During this period, range measurements between the two vehicles constrain the poses of the tag vehicle and reduce the influence of degeneration. Meanwhile, the anchor vehicle receives the poses and corresponding LiDAR point clouds of the tag vehicle to perform initialization and incremental global pose graph optimization.
After the tag vehicle finishes its exploration, the anchor vehicle transfers the optimized results back to the tag vehicle. Finally, the two vehicles exchange the roles of tag and anchor to start the next round of exploration, as shown in Fig. 3 (c).

## III RaLI-Multi Mapping System

### _LiDAR-inertial Odometry_

#### III-A1 IMU Pre-integration

IMU pre-integration was first introduced by Forster et al. in [18] to reduce recomputation when changing linearization points. However, it can also be seamlessly integrated into visual-inertial, LiDAR-inertial, or other inertial-related pipelines under the holistic framework of factor graphs. Here, we use the same procedure as [18] and omit the details of IMU pre-integration.

#### III-A2 Feature Extraction

As pointed out by Ye et al. [19], edge points can hardly improve the results of the LiDAR-inertial odometry in practice. Additionally, extracting edge points is time-consuming, and we find that edge points bring larger errors than planar points due to the lower horizontal resolution of LiDAR sensors. As a result, we only extract planar points. We first downsample the raw point cloud and take the four nearest points of each candidate point as its neighbor points, found by a \\(k\\)-d tree, as shown in Fig. 4. The distances between each neighbor point and the candidate point should be less than double the downsampling resolution. Furthermore, the neighbor points should be distributed over three different rings. The candidate point and two neighbor points are on the same ring, shown as the blue points in Fig. 4. The remaining two neighbor points are in the nearest rings, shown as the orange and green points respectively in Fig. 4. Two unit normal vectors, \\(\\mathbf{n}_{G}\\) and \\(\\mathbf{n}_{O}\\), shown as the green and orange arrows, are the cross products of two vectors, i.e., the dashed lines with corresponding colors in Fig. 4. Finally, the angle between the two normal vectors is calculated via their dot product: \\(\\theta=\\cos^{-1}\\left\\langle\\mathbf{n}_{G},\\mathbf{n}_{O}\\right\\rangle\\). The point is selected as a planar point if \\(\\theta\\) is less than a pre-set threshold. Otherwise, the point is rejected. The normal vector of the planar point is defined as the unit vector of the sum of the two normal vectors, i.e., \\(\\mathbf{n}_{i}=\\frac{\\mathbf{n}_{G}+\\mathbf{n}_{O}}{|\\mathbf{n}_{G}+\\mathbf{n}_{O}|}\\).
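A minimal sketch of this planar-point test is given below, using Eigen (the paper only states a C++/ROS implementation, so the library choice and all names are assumptions). The pairing of the four neighbors into the two cross products follows one plausible reading of Fig. 4, and the two normals are assumed to be consistently oriented.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <optional>

// Planar-point test: build two unit normals from cross products of vectors to
// the neighbor points, accept the candidate if the angle between the normals
// is below a threshold, and return the averaged unit normal n_i.
std::optional<Eigen::Vector3d> planarNormal(
    const Eigen::Vector3d& p,          // candidate point
    const Eigen::Vector3d& same_a,     // first neighbor on the same ring
    const Eigen::Vector3d& same_b,     // second neighbor on the same ring
    const Eigen::Vector3d& ring_up,    // neighbor on one adjacent ring
    const Eigen::Vector3d& ring_down,  // neighbor on the other adjacent ring
    double angle_threshold_rad) {
  const Eigen::Vector3d n_g = (same_a - p).cross(ring_up - p).normalized();
  const Eigen::Vector3d n_o = (same_b - p).cross(ring_down - p).normalized();

  // theta = acos(<n_G, n_O>), clamped for numerical safety.
  const double c = std::max(-1.0, std::min(1.0, n_g.dot(n_o)));
  const double theta = std::acos(c);
  if (theta >= angle_threshold_rad) return std::nullopt;  // reject the point

  // n_i = (n_G + n_O) / |n_G + n_O|
  return (n_g + n_o).normalized();
}
```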
#### III-A3 Scan-to-Map Matching with Multi-Metric Weights

We propose a group of multi-metric weights for the LiDAR points and apply the relative transformation obtained from IMU pre-integration as the initial guess to update the front end. The source cloud is the planar feature cloud \\({}^{\\mathcal{F}}\\mathcal{L}\\) extracted in the former part, and the target cloud is the submap consisting of the nearest \\(N_{kf}\\) keyframes in the local map of each vehicle. Our scan matching module then estimates the pose of the current point cloud in the submap coordinate system.

Fig. 3: Illustration of a two-vehicle mapping system exploring a corridor-like environment. The orange and blue triangles represent two vehicles. The yellow rectangle is the initial relative pose constraint. Green dashed lines are range measurements. Orange and blue dotted lines are the trajectories of the two vehicles.

For each iteration, we first transform a point to the submap frame. The neighbor points in the submap are determined by a nearest neighbor search within a pre-set range threshold centered at the current point. Then, we estimate the normal vector \\(\\mathbf{n}_{j}\\) in the same way as when extracting planar feature points. The optimal pose \\(\\mathcal{X}_{i}\\) is given by minimizing the point-to-plane distance cost function,

\\[r_{\\mathcal{L}}\\left(\\mathcal{X}_{i},\\mathbf{\\mathcal{L}}_{i}\\right)=\\operatorname*{argmin}_{\\mathcal{X}_{i}}\\sum_{j}\\rho\\left(\\omega_{j}\\left\\langle\\mathbf{R}_{i}\\mathbf{p}_{j}+\\mathbf{t}_{i}-\\mathbf{p}_{j}^{center},\\mathbf{n}_{j}\\right\\rangle\\right) \\tag{2}\\]

where \\(\\rho\\left(\\cdot\\right)\\) is a Huber loss function and \\(\\mathbf{R}_{i}\\) and \\(\\mathbf{t}_{i}\\) are the rotation matrix and the translation vector of \\(\\mathcal{X}_{i}\\), respectively. \\(\\mathbf{p}_{j}\\) is the current point and \\(\\mathbf{p}_{j}^{center}\\) is the mass centroid of the neighbor points. The multi-metric weight \\(\\omega_{j}\\) is

\\[\\omega_{j}=\\eta_{r}\\omega_{j}^{range}+\\eta_{n}\\omega_{j}^{neighbor}+\\eta_{k}\\omega_{j}^{kinematic}, \\tag{3}\\]

where

\\[\\omega_{j}^{range}=\\frac{1}{1+e^{-\\frac{2.5}{l_{Q3}}\\left(r_{j}-l_{Q2}\\right)}}, \\tag{4}\\]

\\[\\omega_{j}^{neighbor}=\\left\\{\\begin{array}{cc}\\frac{n_{j}^{neighbor}}{N_{neighbor}},&n_{j}^{neighbor}<N_{neighbor}\\\\ 1,&n_{j}^{neighbor}\\geq N_{neighbor}\\end{array}\\right., \\tag{5}\\]

\\[\\omega_{j}^{kinematic}=\\left\\{\\begin{array}{cc}\\cos^{-1}\\left\\langle p_{j},n_{j}\\right\\rangle\\cdot r_{j},&\\delta\\theta_{j}>\\theta_{th}\\\\ 0,&else\\end{array}\\right., \\tag{6}\\]

\\(\\eta_{r}\\), \\(\\eta_{n}\\) and \\(\\eta_{k}\\) are normalized weights (taken as 0.5, 0.2 and 0.3 for all experiments, respectively). \\(\\omega_{j}^{range}\\) enhances the influence of far points. In (4), \\(r_{j}\\) represents the range of the current point \\(\\mathbf{p}_{j}\\), \\(l_{Q2}\\) and \\(l_{Q3}\\) are the second and third quartiles of all ranges of the current feature points, and \\(e\\) is the base of the natural exponential. \\(\\omega_{j}^{neighbor}\\) guarantees that the current point lies in a spherical region with high point density. In (5), \\(n_{j}^{neighbor}\\) is the number of neighbor points found by the nearest neighbor search, whose search radius is usually set to double the point cloud downsampling resolution, and \\(N_{neighbor}\\) is a pre-set threshold related to the sampling resolution. \\(\\omega_{j}^{kinematic}\\) is designed for large-rotation conditions. In (6), \\(\\delta\\theta_{j}\\) represents the rotation angle of the IMU pre-integration result and can be computed from the quaternion \\(\\delta\\mathbf{q}_{j}=(w,x,y,z)\\) as \\(\\delta\\theta_{j}=\\tan^{-1}\\left(\\sqrt{x^{2}+y^{2}+z^{2}},w\\right)\\). \\(\\theta_{th}\\) is a pre-defined rotation angle threshold.
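The weights in (3)-(6) are cheap to evaluate per point. The sketch below (Eigen assumed; all names hypothetical) computes them for a single feature point with the stated default combination weights 0.5, 0.2, and 0.3; normalizing \\(\\mathbf{p}_{j}\\) before the inverse cosine in (6) is an added assumption.

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

// Inputs for one feature point p_j; names are illustrative only.
struct WeightInput {
  double r_j;           // range of the current point
  double l_q2, l_q3;    // 2nd and 3rd quartiles of all feature-point ranges
  int n_neighbor;       // number of neighbors found within the search radius
  int N_neighbor;       // pre-set neighbor-count threshold
  Eigen::Vector3d p_j;  // current point in the sensor frame
  Eigen::Vector3d n_j;  // its estimated unit normal
  double delta_theta;   // rotation angle of the IMU pre-integration result
  double theta_th;      // rotation-angle threshold
};

double multiMetricWeight(const WeightInput& in) {
  // (4) range weight: logistic curve that up-weights far points.
  const double w_range =
      1.0 / (1.0 + std::exp(-(2.5 / in.l_q3) * (in.r_j - in.l_q2)));

  // (5) neighbor weight: saturating ratio of found neighbors.
  const double w_neighbor =
      in.n_neighbor < in.N_neighbor
          ? static_cast<double>(in.n_neighbor) / in.N_neighbor
          : 1.0;

  // (6) kinematic weight: active only under large rotations.
  double w_kinematic = 0.0;
  if (in.delta_theta > in.theta_th) {
    const double c = std::clamp(in.p_j.normalized().dot(in.n_j), -1.0, 1.0);
    w_kinematic = std::acos(c) * in.r_j;
  }

  // (3) combination with eta_r = 0.5, eta_n = 0.2, eta_k = 0.3.
  return 0.5 * w_range + 0.2 * w_neighbor + 0.3 * w_kinematic;
}
```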
#### III-A4 Keyframe Selection

We find in experiments that common keyframe selection methods [20, 21], including both distance-based and rotation-based methods, are unstable in indoor or narrow environments, especially at the corner of a corridor. Distance-based keyframe selection methods can hardly obtain a keyframe at the corner of a corridor, leading to less robustness when passing through the corner. Rotation-based methods can easily induce distortion of point clouds due to vehicle vibration. To efficiently select keyframes in indoor or narrow environments, we consider the overlap of two point clouds through an Octree [22], which is faster than a \\(k\\)-d tree in voxel searching. After transforming the current scan to the frame of the last keyframe, if the distance between a point in the current scan and its closest point in the last keyframe is less than double the downsampling resolution, the point is labeled as overlap. If the ratio of overlap points in the current scan is less than a pre-set threshold, we select the current scan as a keyframe.

### _Geometry-based Degeneration Detection_

Firstly, we take two examples to present the degeneration detection method: a non-degenerate environment and a degenerate environment, as shown in Fig. 5 (a) and (c). The colors of the points represent different clusters of normal vectors and are generated randomly. In Fig. 5 (a), red and green points represent mutually perpendicular walls, brown points are the ground plane, and other colors, such as pink and purple, can be treated as noise points. However, in Fig. 5 (c), green points represent the wall that occupies most of the view and red points are the ground plane. Purple points are the other wall, at an angle of approximately 45 degrees to the green-point wall. In order to better visualize the degeneracy of the environment, we project the normal vectors from the three-dimensional sphere coordinate system to a two-dimensional plane coordinate system by applying a Mercator-like projection method. The results are shown in Fig. 5 (b) and (d), respectively.

Fig. 4: Illustration of the feature extraction. Points on the same ring are represented with the same color. For simplicity, only three different rings are shown. The point with a red border represents the candidate point. (a) The candidate point is selected to be a planar point whose neighbor region is in a plane. (b) The candidate point is rejected.

Fig. 5: Point clouds with random colors in (a) and (c) represent different clusters of normal vectors. (b) and (d) are the corresponding normal vectors projected onto a two-dimensional plane. Stick marks in red and green colors on the coordinate axes represent the distribution of the raw data on the respective axes. The brighter the color, the more normal vectors there are in this area.

From the density maps, it is simple to identify the walls perpendicular to the floor from the yellow and green areas in both scenarios. However, the ground points in both pictures and the purple-point wall in Fig. 5 (c), colored in light blue, are less obvious due to their smaller numbers; they are located around (20, -140) in Fig. 5 (b) and around (80, 0) and (-30, -130) in Fig. 5 (d). According to the above examples, we find that normal vectors in a degenerate environment exhibit a highly clustered distribution: they can be classified into a finite number of clusters, and the number of vectors in different clusters varies widely. We therefore formulate the degeneracy by analyzing the distribution of normal vectors through Principal Component Analysis (PCA). We treat the set of normal vectors as the normal cloud \\({}^{\\mathcal{N}}\\mathcal{L}\\), and the covariance matrix \\(\\mathbf{\\Sigma}_{n}\\) of \\({}^{\\mathcal{N}}\\mathcal{L}\\) is calculated as follows,

\\[\\mathbf{\\Sigma}_{n}=\\frac{1}{N_{{}^{\\mathcal{N}}\\mathcal{L}}}\\sum_{i=1}^{N_{{}^{\\mathcal{N}}\\mathcal{L}}}\\left(\\mathbf{n}_{i}-\\bar{\\mathbf{n}}\\right)\\left(\\mathbf{n}_{i}-\\bar{\\mathbf{n}}\\right)^{\\top} \\tag{7}\\]

where \\(\\bar{\\mathbf{n}}\\) is the mass center of \\({}^{\\mathcal{N}}\\mathcal{L}\\) and \\(N_{{}^{\\mathcal{N}}\\mathcal{L}}\\) is the number of points in \\({}^{\\mathcal{N}}\\mathcal{L}\\). Then, the eigenvalues \\(\\lambda_{1}\\geq\\lambda_{2}\\geq\\lambda_{3}\\geq 0\\) are determined by the eigenvalue decomposition of \\(\\mathbf{\\Sigma}_{n}\\). Degeneration can occur in all directions, separately or simultaneously. To simplify the problem, we make the following two assumptions: 1) since our vehicles move on the ground, we assume that the LiDAR sensors will always observe the ground plane and the vehicles will not degenerate in the vertical direction; 2) unlike exploring open terrain such as grassland, a desert, or a lake, where there are no sufficient constraints in any horizontal direction for the LiDAR odometry, we assume that only one direction is mainly degenerated in the horizontal plane. Typical examples include corridors, tunnels, underground passages, and so on. Thus, we merely consider the two smallest eigenvalues, i.e., \\(\\lambda_{2}\\) and \\(\\lambda_{3}\\). The distribution of the normal cloud \\({}^{\\mathcal{N}}\\mathcal{L}\\) can then be characterized by the _degenerate degree_ \\(\\sigma_{deg}\\), inspired by [23]: \\(\\sigma_{deg}=\\frac{\\lambda_{2}}{\\lambda_{3}}\\), and the _degenerate direction_ is the eigenvector of the smallest eigenvalue, i.e., \\(\\mathbf{e}_{3}\\). If the _degenerate degree_ \\(\\sigma_{deg}\\) is less than a threshold, the environment is considered degenerate.
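A compact way to realize this detector is to run an eigen-decomposition on the covariance of the normal cloud and apply the criterion exactly as defined above. The sketch below uses Eigen (an assumption) and hypothetical names.

```cpp
#include <Eigen/Dense>
#include <Eigen/Eigenvalues>
#include <algorithm>
#include <vector>

struct DegeneracyResult {
  double degree = 0.0;            // sigma_deg = lambda_2 / lambda_3
  Eigen::Vector3d direction;      // eigenvector of the smallest eigenvalue, e_3
  bool degenerate = false;        // degree below the threshold
};

// Degeneration detection from the distribution of feature normal vectors:
// build the covariance of the normal cloud (Eq. (7)), eigen-decompose it, and
// compare the degenerate degree against a threshold.
DegeneracyResult detectDegeneracy(const std::vector<Eigen::Vector3d>& normals,
                                  double threshold) {
  Eigen::Vector3d mean = Eigen::Vector3d::Zero();
  for (const auto& n : normals) mean += n;
  mean /= static_cast<double>(normals.size());

  Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
  for (const auto& n : normals) {
    const Eigen::Vector3d d = n - mean;
    cov += d * d.transpose();
  }
  cov /= static_cast<double>(normals.size());

  // Eigenvalues are returned in increasing order: lambda_3 <= lambda_2 <= lambda_1.
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
  const double lambda3 = es.eigenvalues()(0);
  const double lambda2 = es.eigenvalues()(1);

  DegeneracyResult out;
  out.degree = lambda2 / std::max(lambda3, 1e-12);  // guard against division by zero
  out.direction = es.eigenvectors().col(0);          // e_3
  out.degenerate = out.degree < threshold;
  return out;
}
```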
### _Range Constraints for Degenerate Correction_

#### III-C1 Range Residuals

Diverse sensors can be used for range measurement, such as UWB, Zigbee, WiFi, light sensors, and so on. All measurements are noisy. Considering the measurement noise, we smooth the raw range observations online over a past time horizon with a least-squares smoother. Then, the residual of the range measurement between vehicle \\(j\\) and vehicle \\(k\\) at timestamp \\(i\\) can be formulated as

\\[r_{u}\\left(\\mathcal{X}_{i}^{v_{j}},\\mathcal{X}_{i}^{v_{k}},u_{i}^{jk}\\right)=\\left|\\mathbf{x}_{i}^{v_{j}}-\\mathbf{x}_{i}^{v_{k}}\\right|-u_{i}^{jk}+\\eta_{u_{i}^{jk}} \\tag{8}\\]

where \\(\\mathcal{X}_{i}^{v_{j}}\\) and \\(\\mathcal{X}_{i}^{v_{k}}\\) represent the states of the two vehicles at timestamp \\(i\\) obtained from the LiDAR-inertial odometry and \\(u_{i}^{jk}\\) represents the corresponding smoothed range measurement. \\(\\eta_{u_{i}^{jk}}\\sim\\mathcal{N}\\left(0,\\sigma_{u_{i}^{jk}}^{2}\\right)\\) is zero-mean Gaussian noise.

#### III-C2 Degenerate Component Correction

According to the distribution of features proposed in Section III-B, the environmental degeneration can be monitored in real time. If degeneration is detected and the gap between the LiDAR-inertial odometry and the range measurement exceeds a threshold, we can apply the range observation to reduce the position drift, based on the _degenerate direction_ calculated in Section III-B. As shown in Fig. 6, the corrected state \\(\\mathbf{x}_{k}^{correct}\\) should be located on the circle centered at the anchor vehicle with radius \\(u_{k}\\), which represents the range measurement between the anchor vehicle and the tag vehicle at timestamp \\(k\\). We omit the superscript of \\(u_{k}\\) for simplicity. We view the state estimate \\(\\mathbf{x}_{k}^{deg}\\) as a vector with \\(\\mathbf{x}^{anchor}\\) as the origin. Since the gap is mainly due to the degeneration, the error vector of the estimated state \\(\\mathbf{x}_{k}^{deg}\\) is considered to lie along the _degenerate direction_, which is represented by the unit eigenvector \\(\\mathbf{e}_{3}\\). We denote the magnitude of the error by \\(s\\). Then, constraining the problem to the XY plane, we can obtain the error \\(s\\mathbf{e}_{3}\\), which we call the compensation vector, from the equation

\\[\\left|\\mathbf{x}_{k}^{deg}-\\mathbf{x}_{k}^{anchor}+s\\mathbf{e}_{3}\\right|=u_{k}+\\eta_{u_{k}}. \\tag{9}\\]

With \\(s\\mathbf{e}_{3}\\), we can correct the influence of the degeneration and obtain

\\[\\mathbf{x}_{k}^{correct}=\\mathbf{x}_{k}^{deg}+s\\mathbf{e}_{3}. \\tag{10}\\]

Fig. 6: Degeneration correction through the range information. The orange triangle is the static anchor vehicle and the other two triangles represent a common tag vehicle, where the light blue triangle is the degenerate state and the deep blue triangle is the corrected state. The green dotted line is the range circle and the green dashed lines are the corresponding radii. The red curly bracket shows the difference between the LiDAR-inertial odometry and the range measurement. The yellow dashed line and the purple arrow represent the degenerate direction and the compensation vector at timestamp \\(k\\), respectively.
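Solving (9) for the scalar \\(s\\) is a one-dimensional quadratic once everything is projected onto the XY plane. The Eigen-based sketch below illustrates the correction in (10); the names are hypothetical, the noise term is dropped, and choosing the smallest-magnitude root (i.e., the minimal correction) is an assumption made for this sketch.

```cpp
#include <Eigen/Dense>
#include <cmath>

// Correct the degenerate component of the estimated position using one range
// measurement u_k to the stationary anchor, following Eqs. (9)-(10).
// The problem is restricted to the XY plane; the z component is left unchanged.
// If the range circle cannot be reached along the degenerate direction, the
// estimate is returned unmodified (a fallback choice made for this sketch).
Eigen::Vector3d correctDegeneration(const Eigen::Vector3d& x_deg,
                                    const Eigen::Vector3d& x_anchor,
                                    const Eigen::Vector3d& e3,  // degenerate dir.
                                    double u_k) {
  const Eigen::Vector2d d = (x_deg - x_anchor).head<2>();
  const Eigen::Vector2d e = e3.head<2>().normalized();

  // |d + s e| = u_k  =>  s^2 + 2 s (d.e) + (|d|^2 - u_k^2) = 0
  const double b = d.dot(e);
  const double c = d.squaredNorm() - u_k * u_k;
  const double disc = b * b - c;
  if (disc < 0.0) return x_deg;

  const double s1 = -b + std::sqrt(disc);
  const double s2 = -b - std::sqrt(disc);
  const double s = (std::abs(s1) < std::abs(s2)) ? s1 : s2;  // minimal correction

  Eigen::Vector3d corrected = x_deg;
  corrected.head<2>() += s * e;  // Eq. (10): x_correct = x_deg + s * e_3
  return corrected;
}
```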
### _Dynamical Initialization_

Before globally optimizing the pose graph, the anchor vehicle needs to unify the coordinate systems of all vehicles. To reduce the computational burden in the following exploration, we estimate the transformations between the global frame and each local frame in the first round of exploration and fix these transformations in the following rounds. At the beginning of the first exploration round, an initial anchor vehicle is randomly selected from all vehicles. The global frame is defined as the local frame of this initial anchor vehicle. As the RaLI-Multi is dynamically centralized, the anchor vehicle may differ between exploration missions, and therefore so may the global frame. During initialization, each tag vehicle performs the odometry as described in Section III-A. Meanwhile, the anchor vehicle receives the odometry and local maps published by each tag vehicle, the range measurements between vehicles, and pre-set initial pose priors. When the size of the local maps exceeds a pre-set threshold, the anchor vehicle starts to perform the initialization as follows,

\\[\\operatorname*{argmin}_{{}^{L}\\mathbf{\\mathcal{X}},\\mathbf{\\mathcal{T}}}\\left\\{\\sum_{v\\in\\mathbf{\\mathcal{V}}}\\left\\|r_{\\mathcal{L}}^{v}\\left({}^{L}\\mathbf{\\mathcal{X}}^{v},\\mathbf{\\mathcal{L}}^{v}\\right)\\right\\|_{\\mathbf{P}_{\\mathcal{L}}^{-1}}^{2}+\\sum_{v\\in\\mathbf{\\mathcal{V}}}r_{scan2map}^{v}\\left(\\mathbf{\\mathcal{T}}\\right)+\\sum_{u_{i}^{jk}\\in\\mathbf{\\mathcal{U}},\\,v_{j},v_{k}\\in\\mathbf{\\mathcal{V}}_{a}}\\left\\|r_{u}\\left({}^{L}\\mathcal{X}_{i}^{v_{j}},{}^{L}\\mathcal{X}_{i}^{v_{k}},\\mathcal{T}^{v_{j}},\\mathcal{T}^{v_{k}},u_{i}^{jk}\\right)\\right\\|_{\\sigma_{u_{i}^{jk}}^{-2}}^{2}\\right\\} \\tag{11}\\]

where \\(\\mathbf{\\mathcal{V}}\\) represents the set of tag vehicles and \\(\\mathbf{\\mathcal{V}}_{a}\\) represents the set of all vehicles in the RaLI-Multi, including the anchor vehicle and all tag vehicles. \\(r_{\\mathcal{L}}^{v}\\left({}^{L}\\mathbf{\\mathcal{X}}^{v},\\mathbf{\\mathcal{L}}^{v}\\right)\\) is the LiDAR-inertial odometry residual and \\({}^{L}\\mathbf{\\mathcal{X}}^{v}\\) is the set of local poses of vehicle \\(v\\). \\(r_{scan2map}^{v}\\left(\\mathbf{\\mathcal{T}}\\right)\\) is the scan-to-map registration residual of each tag vehicle, where the scan is the point cloud captured by the anchor vehicle and the map is the local map from the corresponding tag vehicle. \\({}^{L}\\mathcal{X}_{i}^{v_{j}}\\) and \\({}^{L}\\mathcal{X}_{i}^{v_{k}}\\) are the local poses of two vehicles with range constraint \\(u_{i}^{jk}\\). \\(r_{u}\\left({}^{L}\\mathcal{X}_{i}^{v_{j}},{}^{L}\\mathcal{X}_{i}^{v_{k}},\\mathcal{T}^{v_{j}},\\mathcal{T}^{v_{k}},u_{i}^{jk}\\right)\\) is the range constraint between the two vehicles and is defined as

\\[r_{u}\\left({}^{L}\\mathcal{X}_{i}^{v_{j}},{}^{L}\\mathcal{X}_{i}^{v_{k}},\\mathcal{T}^{v_{j}},\\mathcal{T}^{v_{k}},u_{i}^{jk}\\right)=\\left|\\mathbf{R}^{v_{j}}{}^{L}\\mathbf{x}_{i}^{v_{j}}+\\mathbf{t}^{v_{j}}-\\mathbf{R}^{v_{k}}{}^{L}\\mathbf{x}_{i}^{v_{k}}-\\mathbf{t}^{v_{k}}\\right|-u_{i}^{jk}+\\eta_{u_{i}^{jk}}. \\tag{12}\\]
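For reference, the inter-vehicle range residual in (12) reduces to a few lines once the local positions are mapped into the global frame; a minimal Eigen sketch (hypothetical function name, noise term omitted) is:

```cpp
#include <Eigen/Dense>

// Residual of Eq. (12): transform the two local positions into the global
// frame with the initial transformations (R^v, t^v), then compare the
// inter-vehicle distance with the smoothed range measurement u_jk.
double rangeResidual(const Eigen::Matrix3d& R_j, const Eigen::Vector3d& t_j,
                     const Eigen::Vector3d& x_j_local,
                     const Eigen::Matrix3d& R_k, const Eigen::Vector3d& t_k,
                     const Eigen::Vector3d& x_k_local,
                     double u_jk) {
  const Eigen::Vector3d p_j = R_j * x_j_local + t_j;  // vehicle j, global frame
  const Eigen::Vector3d p_k = R_k * x_k_local + t_k;  // vehicle k, global frame
  return (p_j - p_k).norm() - u_jk;
}
```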
### _Incremental Global PGO and Map Merging_

During the exploration of the tag vehicles, the anchor vehicle serves as a temporary base station, processing incremental global PGO and map merging. The messages transferred from the tag vehicles to the anchor vehicle include the vehicle poses optimized by local PGO, the corresponding LiDAR point clouds, and the range measurements between two vehicles, tag-to-tag or tag-to-anchor. The optimization process is similar to the scan-to-map matching described in Section III-A and the range constraints in Section III-C1. To reduce the computational burden of the anchor vehicle, we reduce the iteration number of the global optimization if no degeneration occurs, and only optimize the poses in the current exploration round when there are no loop closures at the system level. At the end of each exploration round, the anchor vehicle publishes the global map and the optimized poses to the corresponding tag vehicles. Hence, all vehicles share the same global map.

### _Dynamic Anchor Role Selection_

After all tag vehicles finish their exploration, the next anchor vehicle is selected. The finish conditions are the three cases described in Section II-A. In the third case, if all frontiers in the exploration area of a tag vehicle have been examined, the exploration of this tag vehicle in the current round is finished. The selection of the next anchor vehicle is then determined by the frontiers. The current anchor vehicle combines the frontiers received from each tag vehicle and finds the largest frontier area. Finally, the vehicle closest to the center of the largest frontier is selected as the new anchor.

## IV Experiments

### _Implementation_

We perform three experiments to evaluate the proposed methods: the LiDAR-inertial odometry analysis (exp1), the RaLI-Multi with two vehicles in a long corridor-like environment (exp2), and the RaLI-Multi with three vehicles in a complex environment (exp3). The first experiment is mainly designed for evaluating the accuracy of the LiDAR-inertial odometry. The UWB anchors applied in exp1 are shown in Fig. 7 (a) and provide a reference trajectory. Fig. 7 (b-c) show the scenarios in exp2 and exp3. Fig. 7 (d) shows the unmanned ground vehicles with a LiDAR (RoboSense RS-LiDAR-16), a UWB module (Nooploop LinkTrack P-B), and an IMU (Xsens). In exp2 and exp3, the UWB node on each vehicle is applied for inter-vehicle distance measurement, and we use the UWB model proposed by Nguyen et al. [24]. Our experimental vehicles are equipped with differential steering and spring-damped suspension, and there is high friction between the rubber tires and the tiled floors. These factors make the vehicles prone to sharp changes in height when steering, which probably results in large errors on the Z-axis. We implemented the proposed RaLI-Multi in C++ and the Robot Operating System (ROS). We use the GTSAM [25] framework for the local and global PGO. The Levenberg-Marquardt algorithm is used to solve the pose graph optimization.
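As a rough illustration of how such a pose graph can be assembled, the snippet below builds a toy two-vehicle graph with odometry (between) factors and one inter-vehicle range factor, then solves it with Levenberg-Marquardt in GTSAM. It is only a sketch of the general pattern, not the RaLI-Multi back end; the keys, initial values, and noise parameters are made up for the example.

```cpp
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/sam/RangeFactor.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

int main() {
  const gtsam::Key a0 = gtsam::Symbol('a', 0);  // anchor-vehicle pose
  const gtsam::Key b0 = gtsam::Symbol('b', 0);  // tag-vehicle poses
  const gtsam::Key b1 = gtsam::Symbol('b', 1);

  gtsam::NonlinearFactorGraph graph;

  // The anchor vehicle stays stationary during the round: fix it with a prior.
  auto prior_noise = gtsam::noiseModel::Diagonal::Sigmas(
      (gtsam::Vector(6) << 1e-3, 1e-3, 1e-3, 1e-3, 1e-3, 1e-3).finished());
  graph.add(gtsam::PriorFactor<gtsam::Pose3>(a0, gtsam::Pose3(), prior_noise));

  // LiDAR-inertial odometry of the tag vehicle as a between factor.
  auto odom_noise = gtsam::noiseModel::Diagonal::Sigmas(
      (gtsam::Vector(6) << 0.01, 0.01, 0.01, 0.05, 0.05, 0.05).finished());
  const gtsam::Pose3 odom(gtsam::Rot3(), gtsam::Point3(1.0, 0.0, 0.0));
  graph.add(gtsam::BetweenFactor<gtsam::Pose3>(b0, b1, odom, odom_noise));

  // One smoothed UWB range measurement between anchor and tag (1-D factor).
  auto range_noise = gtsam::noiseModel::Isotropic::Sigma(1, 0.1);
  graph.add(gtsam::RangeFactor<gtsam::Pose3, gtsam::Pose3>(a0, b1, 5.0,
                                                           range_noise));

  // Initial guesses, e.g., taken from the LiDAR-inertial front end.
  gtsam::Values initial;
  initial.insert(a0, gtsam::Pose3());
  initial.insert(b0, gtsam::Pose3(gtsam::Rot3(), gtsam::Point3(3.9, 1.0, 0.0)));
  initial.insert(b1, gtsam::Pose3(gtsam::Rot3(), gtsam::Point3(4.9, 1.1, 0.0)));

  // Solve with Levenberg-Marquardt, as stated in the implementation section.
  const gtsam::Values result =
      gtsam::LevenbergMarquardtOptimizer(graph, initial).optimize();
  result.print("optimized poses:\n");
  return 0;
}
```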
Trajectory errors in exp1 are calculated by EVO [26], and point cloud map errors are estimated by the point-to-mesh distance in CloudCompare1 after a coarse-to-fine alignment.

Footnote 1: [https://github.com/CloudCompare/CloudCompare](https://github.com/CloudCompare/CloudCompare)

Fig. 7: (a) The first experimental scenario with UWB anchors. (b) and (c) are the experimental scenarios in exp2 and exp3, respectively. (d) Hardware setup of the RaLI-Multi.

### _Degeneration Analysis_

We first analyze the degenerate level in exp1 and exp2, as shown in Fig. 8 (a) and Fig. 8 (b), respectively. The lower the degenerate degree, the higher the degenerate level. Red crosses in Fig. 8 (a) are the locations of the UWB anchors, which ensure that each vehicle receives at least four range measurements anywhere along the route. In order to clearly illustrate the degenerate level, we present both the spatial and temporal dimensions. In the spatial part, degenerate values are higher in a corner than in a straight corridor, and the longer the corridor, the lower the degenerate value. As shown in the x-y coordinate system, degenerate values in corners are colored in red or green while straight corridors are mostly in blue. In the temporal part, we find that the degenerate values of exp1 are higher than those of exp2. This is consistent with exp1 being less degenerate than exp2. In our experiments, we define a place as degenerate when its degenerate value is smaller than 3.0.

### _LiDAR-inertial Odometry Evaluation_

Firstly, we discuss the performance of the LiDAR-inertial odometry in exp1. Owing to the lack of ground truth, we place UWB anchors around the environment to measure the distances between a tag and the anchors via Time of Flight (TOF). We then calculate the tag coordinates from these distances. Before the experiment, we pre-deploy UWB beacons as shown in Fig. 7 (a). These UWB beacons are placed at different heights to monitor the height variation of the vehicles. Theoretically, three UWB beacons are enough to estimate the position of a target. However, considering the robustness and precision of the proposed distributed localization system, we redundantly arrange beacons and make sure that each vehicle can receive more than three UWB beacon signals anywhere in exp1.

Fig. 8: Degeneration degree in exp1 and 2.

Fig. 9: Trajectory and orientation results of different methods in exp1.

We compare the proposed LiDAR-inertial odometry with DLO [20], A-LOAM2, LeGO-LOAM [27], FAST-LIO2 [28] and LIO-SAM [21], as shown in Fig. 9 (a) and (b).

Footnote 2: [https://github.com/HKUST-Aerial-Robotics/A-LOAM](https://github.com/HKUST-Aerial-Robotics/A-LOAM)

We can see that LeGO-LOAM and LIO-SAM failed in exp1, where the former degenerates at the beginning of this experiment and the latter degenerates when entering the corridor located near (10, 8.5) in Fig. 9 (a). As a result, we exclude these two from Fig. 9 (b). Among the remaining four methods, FAST-LIO2 and A-LOAM also drift to various degrees. Similar to LIO-SAM, A-LOAM starts to drift next to (10, 8.5) in Fig. 9 (a), especially on the y-axis. In contrast, FAST-LIO2 drifts after the last corner, in the vicinity of (50, 14) in Fig. 9 (a). The odometry degeneration mostly occurs on the x-axis and the drifted point cloud map can be seen in Fig. 10 (a). Moreover, both A-LOAM and FAST-LIO2 have an obvious deviation in the z-axis and pitch. Comparing DLO and the proposed method, both of them resist the degeneration and show little difference from the reference trajectory, labeled ground truth (GT) in Fig. 9 (a). However, DLO drifts more on the z-axis than the proposed method.

Trajectory errors compared with the reference are illustrated in Table I through the EVO APE (Absolute Pose Error), \\(APE_{i}=\\|E_{i}\\|\\), where \\(E_{i}=x_{est,i}-x_{ref,i}\\). MEAN and RMSE in Table I are calculated from the APEs of all timestamps as follows,

\\[\\mathrm{MEAN}=\\frac{1}{N}\\sum_{i=1}^{N}APE_{i} \\tag{13}\\]

\\[\\mathrm{RMSE}=\\sqrt{\\frac{1}{N}\\sum_{i=1}^{N}APE_{i}^{2}} \\tag{14}\\]
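These metrics reduce to a short loop over time-associated positions; a minimal sketch (hypothetical helper, trajectories assumed already aligned and associated) is:

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <vector>

struct ApeStats {
  double mean = 0.0;
  double rmse = 0.0;
};

// APE_i = ||x_est,i - x_ref,i||; MEAN and RMSE follow Eqs. (13)-(14).
ApeStats computeApe(const std::vector<Eigen::Vector3d>& est,
                    const std::vector<Eigen::Vector3d>& ref) {
  ApeStats stats;
  const std::size_t n = std::min(est.size(), ref.size());
  if (n == 0) return stats;

  double sum = 0.0, sum_sq = 0.0;
  for (std::size_t i = 0; i < n; ++i) {
    const double ape = (est[i] - ref[i]).norm();
    sum += ape;
    sum_sq += ape * ape;
  }
  stats.mean = sum / static_cast<double>(n);
  stats.rmse = std::sqrt(sum_sq / static_cast<double>(n));
  return stats;
}
```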
We evaluate the mapping results of DLO, A-LOAM, FAST-LIO2, and the proposed method. The ground truth of the point cloud map is established through CATIA 3D modeling, and the geometric dimensions are measured by laser measuring instruments. The point cloud map of the proposed method, shown in Fig. 10 (b), is labeled in different colors: magenta points are outlier points excluded from the Cloud-to-Mesh (C2M) distance estimation, and the remaining colors correspond to C2M distances, using the same color scale as in Fig. 11 (c). Considering the large drifts in the mapping results of FAST-LIO2, we exclude it from further evaluation. The results of the remaining three methods are shown in Fig. 11 and Table II. On one hand, because A-LOAM records every LiDAR scan into the map, there are more pedestrian points in its final map than in those of DLO and the proposed method. These outlier points are difficult to remove. Thus, the C2M distance statistics of A-LOAM are higher than those of the other two methods, even though we downsample its point cloud. On the other hand, A-LOAM suffers from degeneration, as analyzed in the former paragraph. Both reasons lead to the C2M distances of A-LOAM being much higher than those of DLO and the proposed method. Comparing Fig. 11 (a) and (c), the distribution of C2M distances of the proposed method is closer to zero than that of DLO. Moreover, the bar closest to zero contains over half of all points, while for DLO it is less than one-quarter.

### _Evaluation in a Long Corridor-like Environment_

Next, we evaluate the different methods in a long corridor-like environment. Because the narrow environment constrains the number of vehicles, we evaluate the RaLI-Multi system with two vehicles, and they play the anchor role in turn during the exploration. Since the environment in exp2 is longer and narrower than that in exp1, making it difficult to arrange our distributed localization system, we only evaluate the accuracy of the point cloud map. Trajectories are shown in Fig. 12. LIO-SAM degenerated and failed at the end of the first long corridor, around (65, 2) in Fig. 12 (a). Although A-LOAM and LeGO-LOAM resisted degeneration to some extent, they drifted on different axes, A-LOAM mostly on the x-axis and LeGO-LOAM on the XY-axes. Both of them drift at the same place as LIO-SAM. Moreover, A-LOAM drifts on the z-axis shortly after the beginning of this experiment. The remaining three methods, DLO, FAST-LIO2, and the RaLI-Multi, show similar results. Then, we evaluate the mapping results. The results of our RaLI-Multi and A-LOAM are shown in Fig. 13 (a) and (b), respectively. In the zoomed area of Fig. 13 (b), gray points are point clouds sampled from the 3D reference model and green points show significant offsets from the reference. Due to the large drifts of A-LOAM, its C2M distances also distribute widely. As we can see from Fig. 13 (d), A-LOAM shows two local peaks near 0.65 m and 1 m. Although FAST-LIO2 and DLO show similar performance in Table II, the peak of the C2M distance histogram of FAST-LIO2 in Fig. 13 (c) is located away from zero.
Among these methods, the first histogram bin of the C2M distances of the RaLI-Multi occupies the largest percentage, over 20%, and most C2M distances are within 0.2 m. Details of the C2M distances are also shown in Table II.

### _Evaluation in a Complex Environment_

In exp3, the three RaLI-Multi vehicles and the benchmarks, DLO, A-LOAM, LeGO-LOAM, FAST-LIO2, and LIO-SAM, follow different trajectories, as shown in Fig. 14. Fig. 14 (b)-(d) show the three exploration rounds of the RaLI-Multi in this experiment. In the first round, vehicles 1 and 2 are the two tag vehicles and vehicle 3 is the initial anchor vehicle. Vehicle 1 explores the initial place while vehicle 2 explores in the right direction. The anchor, vehicle 3, receives information from vehicles 1 and 2 to perform the initialization and global optimization. After the two tag vehicles finish their exploration, vehicle 2 is selected as the next anchor because only vehicle 2 has frontiers among the tag vehicles. In the second round, both tag vehicles are heading in the bottom-right direction. At the corner of the corridor, vehicle 1 stops and is selected as the next anchor. In the third round, vehicle 3 explores the bottom-right area while vehicle 2 heads toward the end of the corridor. The mapping results are shown in Table II and Fig. 15, where LeGO-LOAM, FAST-LIO2, and LIO-SAM failed in different places. We then evaluate the mapping results of the remaining three methods. As shown in Fig. 15 (a)-(c), DLO and A-LOAM have higher C2M distances at the labeled area than the RaLI-Multi. From the zoomed areas of the point cloud maps, A-LOAM drifts more in the horizontal directions (XY axes) than DLO, demonstrated by the green points, while DLO shows more errors in the vertical direction (Z axis), represented by green and yellow points. Combining Fig. 15 (d)-(f), most C2M distance histograms of the RaLI-Multi are within 0.2 m, while DLO and A-LOAM still have local peaks around 0.4 m and 0.5 m, respectively. The statistics of the mapping results shown in Table II also demonstrate that the proposed method achieves a better result than the state-of-the-art.

Fig. 11: C2M distance results in exp1.

Fig. 12: Trajectory and orientation results from different methods in exp2.

Fig. 13: Mapping results in exp2. (a) is the mapping result of the RaLI-Multi. (b) is the mapping result of A-LOAM. The drifted area is detailed in a partial enlargement and the gray points are sampled from the 3D model established in CATIA. (c)-(f) are C2M distances from DLO, A-LOAM, FAST-LIO2 and ours.

Fig. 14: Trajectory schematic of the vehicles in the benchmarks and the RaLI-Multi. (a) The trajectory schematic of the vehicle in the benchmarks. (b)-(d) Trajectory schematics of the vehicles in the RaLI-Multi in three exploration rounds; each color represents a vehicle.

Fig. 15: Mapping results in exp3. (a)-(c) are mapping results from DLO, A-LOAM and ours. (d)-(f) are the corresponding C2M distances.

## V Conclusion

In this paper, we propose a range-aided LiDAR-inertial multi-vehicle mapping system for large-scale environments with degeneration. The multi-metric-weight LiDAR-inertial front-end assigns a weight to each feature point based on its range, its neighborhood, and the kinematics of the vehicle, which improves the performance in narrow and degenerate environments. The degeneration detection module can monitor the degeneration online via the distribution of the normal vectors of the feature points. The degeneration correction module can compensate for the LiDAR-inertial odometry along the degenerate direction. The dynamically centralized multi-vehicle system can robustly and flexibly operate in various complex and degenerate environments.
(c)-(f) are C2M distances from DLO, A-LOAM, FAST-LIO2 and ours. Fig. 15: Mapping results in exp3. (a)-(c) are mapping results from DLO, A-LOAM and ours. (d)-(f) are corresponding C2M distances. better mapping results; 2) with the help of degeneration detection and correction, the proposed multi-vehicle system can obtain a low-drift global map in degenerate environments; 3) compared with the state-of-the-art, the RaLI-Multi is more robust in the three experiments. ## References * [1]S. Choudhary, L. Carlone, C. Nieto, J. Rogers, H. I. Christensen, and F. Dellaert (2017) Distributed mapping with privacy and communication constraints: lightweight algorithms and object-based models. The International Journal of Robotics Research36 (12), pp. 1286-1311. Cited by: SSI. * [2]T. M. Nguyen, M. Cao, S. Yuan, Y. Lyu, T. H. Nguyen, and L. Xie (2021) Viral-fusion: a visual-inertial-ranging-lidar sensor fusion approach. IEEE Transactions on Robotics. Cited by: SSI. * [3]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI. * [4]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI. * [5]T. M. Nguyen, M. Cao, S. Yuan, Y. Lyu, T. H. Nguyen, and L. Xie (2021) Viral-fusion: a visual-inertial-ranging-lidar sensor fusion approach. IEEE Transactions on Robotics. Cited by: SSI. * [6]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI. * [7]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI. * [8]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI. * [9]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for autonomous exploration. In 2022 IEEE International Conference on Robotics and automation (ICRA), pp. 146-151. Cited by: SSI. * [10]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI. * [11]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 104-113. Cited by: SSI. * [12]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI. * [13]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI. * [14]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. 
This paper presents a range-aided LiDAR-inertial multi-vehicle mapping system (RaLI-Multi). Firstly, we design a multi-metric weights LiDAR-inertial odometry that fuses observations from an inertial measurement unit (IMU) and a light detection and ranging (LiDAR) sensor. The degenerate level and direction are evaluated by analyzing the distribution of the normal vectors of the feature point clouds and are used to activate the degeneration correction module, in which range measurements correct the pose estimation along the degenerate direction. We then design a multi-vehicle mapping system in which a centralized vehicle receives local maps from each vehicle and range measurements between vehicles to optimize a global pose graph. The global map is broadcast to the other vehicles for localization and mapping updates, and the centralized vehicle is dynamically interchangeable. Finally, we provide three experiments to verify the effectiveness of the proposed RaLI-Multi. The results show its superiority in degenerate environments.
Keywords: multi-vehicle system, simultaneous localization and mapping, range measurement, degeneration detection and correction.
Summarize the following text.
214
arxiv-format/1901_03193v3.md
# Thermal convection, ensemble weather forecasting and distributed chaos

A. Bershadskii
ICAR, P.O. Box 31155, Jerusalem 91000, Israel

## I Distributed chaos

Systems with chaotic dynamics often have frequency power spectra with exponential decay [1]-[7]. For systems described by dynamical equations with partial derivatives (in particular, systems based on the Navier-Stokes equations) the observations are less conclusive, especially for the wavenumber (spatial) power spectra.

Figure 1 shows the kinetic energy spectrum of a perturbation in statistically stationary isotropic homogeneous turbulence at Reynolds number \\(Re\\simeq 2500\\) [8] (the spectral data can be found at the site Ref. [9]). In that paper a direct numerical simulation (DNS) of the Navier-Stokes equations

\\[\\frac{\\partial\\mathbf{u}(\\mathbf{x},t)}{\\partial t}+(\\mathbf{u}\\cdot\\nabla)\\mathbf{u}=-\\nabla p+\\nu\\Delta\\mathbf{u}+\\mathbf{f} \\tag{1}\\]

\\[\\nabla\\cdot\\mathbf{u}(\\mathbf{x},t)=0 \\tag{2}\\]

was performed, and a velocity field realization \\(\\mathbf{u}_{1}\\) was transformed into a new realization \\(\\mathbf{u}_{2}\\) by a slight instant perturbation of the forcing \\(\\mathbf{f}(\\mathbf{x},t)\\). The power spectrum of the field \\(\\delta\\mathbf{u}=\\mathbf{u}_{1}-\\mathbf{u}_{2}\\) was then computed as

\\[E_{d}(k,t)=\\frac{1}{2}\\int_{|\\mathbf{k}|=k}d\\mathbf{k}\\,|\\hat{\\mathbf{u}}_{1}(\\mathbf{k},t)-\\hat{\\mathbf{u}}_{2}(\\mathbf{k},t)|^{2} \\tag{3}\\]

for a steady state. The dashed straight line in Fig. 1 indicates the exponential decay

\\[E(k)=a\\exp-(k/k_{0}) \\tag{4}\\]

The insert to Fig. 1 has been added in order to show that the \\(k_{0}\\) from Eq. (4) corresponds to the peak of the \\(E_{d}(k)\\) spectrum. This is an indication of a tuning of the high-wavenumber chaotic dynamics to the coherent structures with the scale \\(k_{0}\\).

Figure 1: Perturbation kinetic energy spectrum for the steady isotropic turbulence. The dashed straight line indicates the exponential decay Eq. (4).

Ensemble weather forecasting allows one to take into account the intrinsic uncertainty in numerical forecasts of chaotic systems. In the recent paper Ref. [10], results of an idealized ensemble simulation of mesoscale deep-convective systems were reported. A nonhydrostatic cloud-resolving model was used in order to generate ensembles of 20 perturbed and 1 control members. The ensembles were initialized by large-scale (91-km-wavelength) moisture perturbations with random phases. A strong line of thunderstorms developed in all cases (see Ref. [10] for more details of the model configuration and simulation strategy). Figure 2 shows the background (total) kinetic energy spectrum, vertically averaged over the layer \\(0\\leq z\\leq 16\\) km, at 6 hours of the system development with 1-km resolution (the simulations were performed in a doubly periodic horizontal square domain of 512 km \\(\\times\\) 512 km; \\(k\\) is the horizontal wavenumber). The dashed curve indicates the exponential spectral decay Eq. (4) in the log-log scales (here and in all other figures \\(\\log k=\\log_{10}k\\)). The faint straight line, indicating the '-5/3' slope in the log-log scales, is drawn in the figure for reference. Figure 3 shows the corresponding vertically and ensemble averaged spectrum of perturbations in kinetic energy about the ensemble mean at 6 hours of the system development. The dashed curve indicates the exponential spectral decay Eq. (4) in the log-log scales. The spectral data for Figs. 2 and 3 were taken from Fig. 7 of Ref. [10].
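As a side note on Eq. (3), the shell-averaged spectrum of a difference field on a periodic grid can be computed with a discrete Fourier transform. The sketch below is only a minimal illustration under an assumed grid size, normalization convention, and synthetic input fields; it is not the code used in Ref. [8].

```python
import numpy as np

def shell_spectrum(u: np.ndarray) -> np.ndarray:
    """Shell-averaged kinetic energy spectrum of a field u with shape (3, N, N, N)
    on a periodic cube: E(k) = 1/2 * sum over the shell |k| ~ k of |u_hat|^2 (cf. Eq. (3))."""
    n = u.shape[-1]
    u_hat = np.fft.fftn(u, axes=(1, 2, 3)) / n**3          # discrete Fourier coefficients
    energy = 0.5 * np.sum(np.abs(u_hat) ** 2, axis=0)      # energy per Fourier mode
    freqs = np.fft.fftfreq(n, d=1.0 / n)                   # integer wavenumbers
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)
    k_centers = np.arange(1, n // 2)
    spectrum = np.zeros(len(k_centers))
    for i, k in enumerate(k_centers):                      # sum the modes in each spherical shell
        shell = (k_mag >= k - 0.5) & (k_mag < k + 0.5)
        spectrum[i] = energy[shell].sum()
    return spectrum

# Perturbation spectrum E_d(k) of the difference between two nearby realizations.
rng = np.random.default_rng(0)
n = 64
u1 = rng.standard_normal((3, n, n, n))
u2 = u1 + 1e-3 * rng.standard_normal((3, n, n, n))
E_d = shell_spectrum(u1 - u2)
print(E_d[:8])
```

Shells of width one integer wavenumber are assumed here; a different binning only changes the spectrum by a smooth factor.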
In the general case of a statistical ensemble defined by the parameters \\(a\\) and \\(k_{0}\\), the ensemble averaged spectrum can be represented by

\\[E(k)=\\int P(a,k_{0})\\ \\exp-(k/k_{0})\\ da\\,dk_{0} \\tag{5}\\]

with a joint probability distribution \\(P(a,k_{0})\\). If the variables \\(a\\) and \\(k_{0}\\) are statistically independent, then

\\[E(k)\\propto\\int P(k_{0})\\ \\exp-(k/k_{0})\\ dk_{0} \\tag{6}\\]

with a distribution \\(P(k_{0})\\) of the parameter \\(k_{0}\\). Let the characteristic velocity \\(u_{0}\\) vary with the scale \\(k_{0}\\) in a scale invariant form (scaling)

\\[u_{0}\\propto k_{0}^{\\alpha} \\tag{7}\\]

If the vorticity \\(\\mathbf{\\omega}({\\bf x},t)\\) correlation integral

\\[I_{\\omega}=\\int\\langle\\mathbf{\\omega}({\\bf x},t)\\cdot\\mathbf{\\omega}({\\bf x}+{\\bf r},t)\\rangle_{V}\\ d{\\bf r} \\tag{8}\\]

(\\(\\langle\\cdots\\rangle_{V}\\) denotes the ensemble-volume average, cf. Ref. [11]) dominates the scaling Eq. (7), then from dimensional considerations one obtains

\\[u_{0}\\propto I_{\\omega}^{1/2}k_{0}^{1/2} \\tag{9}\\]

For a Gaussian distribution of the characteristic velocity \\(u_{0}\\), the variable \\(k_{0}\\) has the chi-squared (\\(\\chi^{2}\\)) distribution:

\\[P(k_{0})\\propto k_{0}^{-1/2}\\exp-(k_{0}/4k_{\\beta}) \\tag{10}\\]

here \\(k_{\\beta}\\) is a constant. Substituting Eq. (10) into Eq. (6) one obtains

\\[E(k)\\propto\\exp-(k/k_{\\beta})^{1/2} \\tag{11}\\]

## II Thermal Convection

In thermal (Rayleigh-Benard) convection a horizontal layer of fluid is cooled from the top and heated from below. The Boussinesq approximation of the nondimensional equations describing the thermal convection is

\\[\\frac{1}{\\rm Pr}\\left[\\frac{\\partial{\\bf u}}{\\partial t}+({\\bf u}\\cdot\\nabla){\\bf u}\\right]=-\\nabla\\sigma+\\theta\\hat{z}+\\frac{1}{\\sqrt{\\rm Ra}}\\nabla^{2}{\\bf u}, \\tag{12}\\]

\\[\\frac{\\partial\\theta}{\\partial t}+({\\bf u}\\cdot\\nabla)\\theta={\\bf u}_{z}+\\frac{1}{\\sqrt{\\rm Ra}}\\nabla^{2}\\theta, \\tag{13}\\]

\\[\\nabla\\cdot{\\bf u}={\\bf 0}, \\tag{14}\\]

where \\(Pr\\) is the Prandtl number, \\(Ra\\) is the Rayleigh number, \\(\\hat{z}\\) is the buoyancy direction, and \\(\\theta\\) is the deviation of temperature from the heat conduction state [12].

Figure 4 shows the kinetic energy spectrum computed for a direct numerical simulation of thermal (Rayleigh-Benard) convection at \\(Pr=10^{2}\\) and \\(Ra=10^{7}\\) (the spectral data for this figure were taken from Fig. 10 of Ref. [13]). The direct numerical simulation (DNS) was performed in a three-dimensional box with standard periodic boundary conditions on the lateral boundaries. On the bottom and top boundaries, isothermal conditions for the temperature and free-slip conditions for the velocity were used. The dashed curve in Fig. 4 indicates the stretched exponential spectrum Eq. (11).

Figure 5 shows the kinetic energy spectrum computed for a Weather Research and Forecast Model [14] numerical simulation of atmospheric moist convection without the Coriolis effect (the spectral data were taken from Fig. 10 of Ref. [15]). Seven warm bubbles were used in the initial condition in order to initiate convection. The bubbles interact with each other under a wind shear (for more details see Ref. [15]). The spectrum was averaged between 0 and 15 km of height and over 4-6 hours of the evolution. The dashed curve indicates the stretched exponential spectrum Eq. (11).

Figure 2: Vertically and ensemble averaged background (total) kinetic energy spectrum at 6 hours of the system development (here and in all other figures \\(\\log k\\equiv\\log_{10}k\\)).
Figure 3: As in Fig. 2 but for perturbation.
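The step from Eq. (10) to Eq. (11) can be checked numerically: inserting the chi-squared distribution into Eq. (6) should reproduce the stretched exponential with exponent 1/2 up to a constant prefactor. The quadrature below is a sketch with an arbitrary choice of \\(k_{\\beta}\\) and of the sample wavenumbers; it only illustrates this behaviour and is not part of the original analysis.

```python
import numpy as np
from scipy.integrate import quad

k_beta = 1.0

def p_k0(k0):
    # Eq. (10): chi-squared-type distribution of the scale parameter k0
    return k0 ** (-0.5) * np.exp(-k0 / (4.0 * k_beta))

def ensemble_spectrum(k):
    # Eq. (6): superposition of exponentials exp(-k/k0) weighted by P(k0)
    value, _ = quad(lambda k0: p_k0(k0) * np.exp(-k / k0), 1e-12, np.inf)
    return value

ks = np.linspace(1.0, 40.0, 8)
numeric = np.array([ensemble_spectrum(k) for k in ks])
stretched = np.exp(-np.sqrt(ks / k_beta))          # Eq. (11)
# If Eq. (11) holds, the ratio is (nearly) independent of k.
print(np.round(numeric / stretched, 4))
```

The printed ratios staying close to one and the same constant over the whole wavenumber range is the numerical counterpart of Eq. (11).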
## III Helicity dominated distributed chaos

The vorticity dominated thermal convection (distributed chaos) has the stretched exponential kinetic energy spectrum Eq. (11) (see also Ref. [16]). Therefore, let us look at a generalization:

\\[E(k)\\propto\\int P(k_{0})\\;\\exp-(k/k_{0})\\;dk_{0}\\propto\\exp-(k/k_{\\beta})^{\\beta} \\tag{15}\\]

If the distribution of the characteristic velocity \\(u_{0}\\) is \\({\\cal P}(u_{0})\\), then

\\[{\\cal P}(u_{0})du_{0}\\propto P(k_{0})dk_{0} \\tag{16}\\]

From Eqs. (7) and (16) one obtains

\\[P(k_{0})\\propto k_{0}^{\\alpha-1}\\;{\\cal P}(u_{0}(k_{0})) \\tag{17}\\]

From Eq. (15), the asymptote of \\(P(k_{0})\\) at \\(k_{0}\\rightarrow\\infty\\) can be estimated as [17]

\\[P(k_{0})\\propto k_{0}^{-1+\\beta/[2(1-\\beta)]}\\;\\exp(-bk_{0}^{\\beta/(1-\\beta)}) \\tag{18}\\]

with a constant \\(b\\). Then it follows from Eqs. (7), (17) and (18) that for a Gaussian distribution \\({\\cal P}(u_{0})\\) the parameters \\(\\alpha\\) and \\(\\beta\\) are related by the equation

\\[\\beta=\\frac{2\\alpha}{1+2\\alpha} \\tag{19}\\]

For the helicity \\(h=({\\bf u}\\!\\cdot\\!{\\mathbf{\\omega}})\\) dominated distributed chaos the helicity correlation integral

\\[I_{h}=\\int\\langle h({\\bf x},t)\\cdot h({\\bf x}+{\\bf r},t)\\rangle_{V}d{\\bf r} \\tag{20}\\]

should be used instead of the vorticity correlation integral. The helicity correlation integral \\(I_{h}\\) was considered for the first time in the paper Ref. [18] and is known as the Levich-Tsinober invariant. It is usually associated with helical waves [19]. Then it follows from dimensional considerations:

\\[u_{0}\\propto I_{h}^{1/4}k_{0}^{1/4} \\tag{21}\\]

and using Eq. (19) one obtains \\(\\beta=1/3\\), i.e.

\\[E(k)\\propto\\exp-(k/k_{\\beta})^{1/3} \\tag{22}\\]

Figure 6 shows the kinetic energy spectrum computed for a Weather Research and Forecast Model [14] numerical simulation of atmospheric moist convection with the Coriolis effect (the spectral data were taken from Fig. 11a of Ref. [15]). The dashed curve in Fig. 6 indicates the stretched exponential spectrum Eq. (22) in the log-log scales (cf. the previous Section, Fig. 5).

Figure 5: Kinetic energy spectrum for the Weather Research and Forecast Model numerical simulation of the atmospheric moist convection.
Figure 6: As in Fig. 5 but with addition of the Coriolis effect.

Figure 7 shows the kinetic energy spectrum computed for a DNS of a Rayleigh-Benard-like (thermal) convection on a hemisphere (the spectral data were taken from Fig. 18 of Ref. [20] for the stationary state spectrum). The fluid was heated at the equator, and the temperature gradient between the equator and the pole produces thermal plumes near the equator which move up toward the pole and initiate a thermal convection. The dashed curve in Fig. 7 indicates the stretched exponential spectrum Eq. (22) in the log-log scales.

Figure 8 shows the mean spectrum of kinetic energy in a 48-h weather forecast experiment at 500 hPa. The spectral data were taken from Fig. 7b of Ref. [21] (the forecasts were made with the Environment Canada Deterministic Weather Forecasting Systems based on ensemble-variational data assimilation). The dashed curve indicates the stretched exponential spectrum Eq. (22) and covers the Meso- and Synoptic scales (the dotted vertical line indicates the Planetary scales).

Figure 7: Kinetic energy spectrum for the stationary state of the thermal convection on a hemisphere.
Figure 8: Mean spectrum of kinetic energy in short-range weather forecasts (48-hours) experiment at 500 hPa.
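The dashed curves in Figs. 4-8 correspond to the stretched exponential forms Eq. (11) or Eq. (22) with a fitted \\(k_{\\beta}\\). The snippet below shows one way such a fit can be performed on tabulated spectra by working in \\(\\log E\\); the synthetic spectrum, the noise level, the initial guess, and the use of curve_fit are assumptions of this sketch, not the fitting procedure of the cited papers.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_stretched_exp(k, log_a, k_beta, beta):
    # log of E(k) = a * exp(-(k/k_beta)**beta)
    return log_a - (k / k_beta) ** beta

# Synthetic spectrum with beta = 1/3 (Eq. (22)) plus multiplicative noise.
rng = np.random.default_rng(3)
k = np.arange(1.0, 200.0)
true_kb, true_beta = 2.0, 1.0 / 3.0
E = np.exp(-(k / true_kb) ** true_beta) * np.exp(rng.normal(0, 0.05, k.size))

popt, _ = curve_fit(
    log_stretched_exp, k, np.log(E),
    p0=(0.0, 1.0, 0.5),                               # initial guess for (log_a, k_beta, beta)
    bounds=([-10, 1e-3, 0.05], [10, 1e3, 1.0]),
)
print("fitted k_beta = %.3f, beta = %.3f" % (popt[1], popt[2]))
```

Fitting \\(\\log E(k)\\) rather than \\(E(k)\\) keeps the weakly energetic high-wavenumber tail from being ignored by the least-squares objective.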
## IV Ensemble Weather Forecasting

An ensemble forecast for an East Coast snowstorm was reported in Ref. [22]. The 100-member ensembles were generated by an ensemble Kalman filter [23]. The Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS) [24] was then used in order to integrate the ensembles for a 36-hour forecast. The initial conditions were slightly altered for this purpose. The forecasting simulation started at 1200 UTC 25 Dec. 2010 with real atmospheric data. Figure 9 shows the ensemble and meridionally averaged kinetic energy spectrum at the height 500 hPa. Figure 10 shows the ensemble and meridionally averaged kinetic energy spectrum of the initially generated perturbation at 36 hours of the lead time (the data were taken from Fig. 6b of Ref. [22]). The perturbation is the difference between one ensemble member and the ensemble mean. The dashed curves in the figures indicate the stretched exponential decay Eq. (22). The authors of Ref. [22] believe that the perturbation growth in their simulation is a result of quasi-uniform amplification of the perturbation at all wavenumbers (see also Refs. [22], [25]-[27]).

Figure 9: Background kinetic energy spectrum for the 25 Dec. 2010 snowstorm.
Figure 10: Kinetic energy spectrum of the perturbation for the 25 Dec. 2010 snowstorm at 36 hours of the lead time.

Another snowstorm, in the Pacific Northwest, was studied by the same method in Ref. [25]. Figure 11 shows the mean horizontal kinetic energy spectrum at the height 700 hPa at 1200 UTC 17 Dec. 2008 (the data were taken from Fig. 13 of Ref. [25]). Figure 12 shows the kinetic energy spectrum of the initially generated perturbation at the same height at 36 hours of the lead time (the data were taken from Fig. 14d of Ref. [25]). The forecasting simulation started at 0000 UTC 17 Dec. 2008 with real atmospheric data. The dashed curves in Figs. 11 and 12 indicate the stretched exponential decay Eq. (11).

Figure 11: Mean horizontal kinetic energy spectrum at the height 700 hPa at 1200 UTC 17 Dec. 2008.
Figure 12: Kinetic energy spectrum of the perturbations for the 17 Dec. 2008 snowstorm at 36 hours of the lead time.

Finally, let us consider the results of a simulation experiment with eleven cases of mid-latitude convection in the central US [28]. In this experiment the influence of the multiscale perturbations generated by the initial conditions on the storm-scale ensemble forecasts was studied using the Weather Research and Forecasting Advanced Research Model and the Global Forecast System Model at NCEP (see Refs. [28], [29] for more details about the cases, configuration and simulation strategy). Figure 13 shows the power spectrum of ensemble perturbations (ensemble member minus ensemble mean, averaged over all ensemble and case members) for the \\(u\\) component of the wind at 900 hPa at a 3-h forecast time. The spectral data were taken from Fig. 2 of Ref. [28]. The dashed curve indicates the stretched exponential decay Eq. (11).

Figure 13: Power spectrum of ensemble perturbations (ensemble member minus ensemble mean, averaged over all ensemble and case members) for the \\(u\\) component of wind at 900 hPa at a 3-h forecast time.
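In the snowstorm experiments the perturbation spectra are computed from the difference between individual ensemble members and the ensemble mean. A compact sketch of that bookkeeping is given below for a one-dimensional periodic field; the ensemble size, the construction of the synthetic members, and the normalization are assumptions made only to keep the example self-contained.

```python
import numpy as np

def spectrum_1d(field: np.ndarray) -> np.ndarray:
    """One-dimensional power spectrum over positive wavenumbers."""
    f_hat = np.fft.rfft(field) / field.size
    return 0.5 * np.abs(f_hat[1:]) ** 2

rng = np.random.default_rng(4)
n, members = 512, 20
base = np.cumsum(rng.standard_normal(n))                 # a common large-scale "background"
ensemble = np.stack([base + 0.1 * rng.standard_normal(n) for _ in range(members)])

mean_state = ensemble.mean(axis=0)
background_spec = spectrum_1d(mean_state)
# Perturbation spectrum: spectrum of (member - ensemble mean), averaged over members.
pert_spec = np.mean([spectrum_1d(m - mean_state) for m in ensemble], axis=0)

print("background E(k) head:", np.round(background_spec[:5], 6))
print("perturbation E(k) head:", np.round(pert_spec[:5], 8))
```

The same member-minus-mean construction, applied to gridded wind fields and followed by a horizontal or meridional average, gives the kind of perturbation spectra discussed above.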
## V Discussion

In the paper Ref. [30] a two-dimensional barotropic vorticity model with the scaling kinetic energy spectra \\(E(k)\\propto k^{-5/3}\\) and \\(E(k)\\propto k^{-7/3}\\) was used in order to estimate predictability properties of atmospheric phenomena. A vast number of studies was then devoted to the predictability of multiscale systems for the cases with power-law (scaling) kinetic energy spectra (see, for instance, the recent Refs. [10], [15] and references therein). The power-law spectra are related to scale-local interactions (such as cascades, for instance) [31], whereas the exponential spectra are a result of non-local interactions directly relating very different scales [32]. This difference has serious consequences for predictability [16]. The non-local interactions, directly relating large scales with small ones, provide a basis for more efficient predictability.

The above considered examples show that the distributed chaos approach with the stretched exponential spectra Eq. (15) seems to be more relevant for the description of the buoyancy driven fluid dynamics and, especially, for the ensemble weather forecasting [33].

## Acknowledgement

I thank A. Berera and R.D.J.G. Ho for sharing their data and discussions, and S. Vannitsem for comments.

## References

* (1) U. Frisch and R. Morf, Phys. Rev., **23**, 2673 (1981).
* (2) J.D. Farmer, Physica D, **4**, 366 (1982).
* (3) N. Ohtomo, K. Tokiwano, Y. Tanaka et al., J. Phys. Soc. Jpn., **64**, 1104 (1995).
* (4) D.E. Sigeti, Phys. Rev. E, **52**, 2443 (1995).
* (5) A. Bershadskii, EPL, **88**, 60004 (2009).
* (6) S.M. Osprey and M.H.P. Ambaum, Geophys. Res. Lett., **38**, L15702 (2011).
* (7) J.E. Maggs and G.J. Morales, Phys. Rev. Lett., **107**, 185003 (2011).
* (8) A. Berera and R.D.J.G. Ho, Phys. Rev. Lett., **120**, 024101 (2018).
* (9) [https://datashare.is.ed.ac.uk/handle/10283/2650](https://datashare.is.ed.ac.uk/handle/10283/2650)
* (10) J.A. Weyn and D.R. Durran, J. Atmos. Sci., **75**, 3331 (2018).
* (11) A. Bershadskii, arXiv:1601.07364 (2016).
* (12) G. Silano, K.R. Sreenivasan and R. Verzicco, J. Fluid Mech., **662**, 409 (2010).
* (13) A. Pandey, M.K. Verma and P.K. Mishra, Phys. Rev. E, **89**, 023006 (2014).
* (14) W.C. Skamarock et al., NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi:10.5065/D68S4MVH.
* (15) Y.Q. Sun, R. Rotunno and F. Zhang, J. Atmos. Sci., **74**, 185 (2017).
* (16) A. Bershadskii, arXiv:1811.02449 (2018).
* (17) D.C. Johnston, Phys. Rev. B, **74**, 184430 (2006).
* (18) E. Levich and A. Tsinober, Phys. Lett. A, **93**, 293 (1983).
* (19) E. Levich, Concepts of Physics, **VI**, 239 (2009).
* (20) C.-H. Bruneau et al., Phys. Rev. Fluids, **3**, 043502 (2018).
* (21) M. Buehner et al., Mon. Wea. Rev., **143**, 2532 (2015).
* (22) D.R. Durran and M. Gingrich, J. Atmos. Sci., **71**, 2476 (2014).
* (23) J.S. Whitaker and T.M. Hamill, Mon. Wea. Rev., **130**, 1913 (2002).
* (24) R.M. Hodur, Mon. Wea. Rev., **125**, 1414 (1997).
* (25) D.R. Durran, P.A. Reinecke and J.D. Doyle, J. Atmos. Sci., **70**, 1470 (2013).
* (26) N. Bei and F. Zhang, Quart. J. Roy. Meteor. Soc., **133**, 83 (2007).
* (27) B.E. Mapes et al., J. Meteor. Soc. Japan, **86A**, 175 (2008).
* (28) A. Johnson and X. Wang, Mon. Wea. Rev., **144**, 2579 (2016).
* (29) A. Johnson et al., Mon. Wea. Rev., **143**, 3087 (2015).
* (30) E.N. Lorenz, Tellus, XXI (3), 289 (1969).
* (31) A.S. Monin and A.M. Yaglom, Statistical Fluid Mechanics, Vol. II: Mechanics of Turbulence (Dover Pub., NY, 2007).
* (32) A. Bershadskii, Phys. Fluids, **20**, 085103 (2008).
* (33) Statistical Postprocessing of Ensemble Forecasts (Editors: S. Vannitsem, D.S. Wilks and J.W. Messner, Elsevier, 2019).
Results of direct numerical simulations have been used to show that intensive thermal convection in a horizontal layer and on a hemisphere can be described by the distributed chaos approach. The vorticity and helicity dominated distributed chaos were considered for this purpose. Results of numerical simulations of the Weather Research and Forecast Model (with moist convection and with the Coriolis effect) and of the Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS) were also analysed to demonstrate the applicability of this approach to atmospheric processes. The ensemble forecasts of real winter storms on the East Coast and in the Pacific Northwest, as well as results of a simulation experiment with multiscale storm-scale ensemble forecasts for eleven cases of mid-latitude convection in the central U.S., have also been discussed in this context.
Write a summary of the passage below.
163
arxiv-format/0505215v4.md
"Inhomogeneous Equation of State of the Universe: Phantom Era, Future Singularity and Crossing the P(...TRUNCATED)
"The dark energy universe equation of state (EOS) with inhomogeneous, Hubble parameter dependent ter(...TRUNCATED)
Summarize the following text.
164
arxiv-format/1810_07511v1.md
"# Modeling and Analysis of Wildfire Detection using Wireless Sensor Network with Poisson Deployment(...TRUNCATED)
"We consider a new class of wireless sensor networks, called _Wireless sensor networks_, which are e(...TRUNCATED)
Provide a brief summary of the text.
319
arxiv-format/2403_11735v5.md
"# LSKNet: A Foundation Lightweight Backbone for Remote Sensing\n\nYuxuan Li\\\\({}^{1}\\\\)\n\nXian(...TRUNCATED)
"Remote sensing images pose distinct challenges for downstream tasks due to their inherent complexit(...TRUNCATED)
Summarize the following text.
259
arxiv-format/2408_06356v1.md
"Enhancing Ecological Monitoring with Multi-Objective Optimization: A Novel Dataset and Methodology (...TRUNCATED)
"We introduce a unique semantic segmentation dataset of 6,096 high-resolution aerial images capturin(...TRUNCATED)
Give a concise overview of the text below.
231
arxiv-format/2101_12633v1.md
"_Reference: van Haren, H., H. Uchida, D. Yanagimoto, 2021. Further correcting pressure effects on_\(...TRUNCATED)
"Hadal, \\\\(>\\\\)6000 m deep shipborne Sea-Bird Electronics SBE 911plus Conductivity Temperature D(...TRUNCATED)
Write a summary of the passage below.
229
arxiv-format/2101_09126v1.md
"# Will Artificial Intelligence supersede Earth System and Climate Models?\n\nChristopher Irrgang\n\(...TRUNCATED)
"We outline a perspective of an entirely new research branch in Earth and climate sciences, where de(...TRUNCATED)
Condense the content of the following passage.
135