\section{Introduction} \label{intro}
Object recognition, as a fundamental and key computer vision (CV) technique, has been substantially investigated over the decades. With the help of deep neural networks (DNNs), great success has been achieved regarding recognition accuracy. However, nearly all of the existing solutions work on digital images or videos that are captured by traditional cameras at a fixed rate, commonly 30 or 60 fps. Such traditional frame-based cameras encounter severe challenges in many highly demanding applications that require high speed, high accuracy, or high dynamics, such as autonomous driving, unmanned aerial vehicles (UAV), robotics, gesture recognition, etc. \cite{hwu2018adaptive}. A low frame rate means low temporal resolution and motion blur for high-speed objects. A high frame rate leads to a large amount of data with substantial redundancy and a heavy computational burden inappropriate for mobile platforms.

To tackle the above challenges, one of the alternative approaches is event-based sensing \cite{gallego2019event}. Typical examples include the light detection and ranging (LiDAR) sensor, the dynamic vision sensor (DVS), and the radio detection and ranging (Radar) sensor. LiDAR is a reliable solution for high-speed and precise object detection in a wide view at long distance and has become essential for autonomous driving. DVS cameras have significant advantages over standard cameras with high dynamic range, less motion blur, and extremely small latency \cite{lichtsteiner2008128}. Traditional cameras capture images or videos in the form of frames. Event-based sensors capture images as asynchronous events. Events are created at different time instances and are recorded or transmitted asynchronously. Each event may have time resolution on the order of microseconds. Because events are sparse, the data amount is kept low even in wide-area 3D spatial sensing or high-speed temporal sensing.

Although some research has been conducted on object recognition and classification based on event sensors, asynchronous object recognition is mostly still an open objective \cite{gallego2019event}. Existing methods usually accumulate all the events within a pre-set collection time duration to construct an image frame for recognition. This synchronous processing approach is straightforward and can employ existing DNN methods conveniently. But it overlooks the temporal asynchronous nature of the events and suffers from recognition delays caused by waiting for event accumulation. Too long an accumulation duration leads to image blurring, while too short an accumulation duration loses object details. The fixed accumulation duration is a limiting factor for recognition accuracy in practice. As an example, in \cite{liu2016combined}, the DVS sensor had an event rate from 10K to 300K events per second. For fast object tracking, a short fixed accumulation duration of 20 ms was adopted, which resulted in only $200$ to $6000$ events for each $240\times 180$ image frame. Obviously, this could hardly provide sufficient resolution for the subsequent CNN-based classifier. Novel methods that can recognize objects asynchronously, with the accumulation duration optimized according to the nature of object recognition tasks, are more desirable. As another example, a typical $5$ Hz LiDAR sensor needs 0.2 seconds to collect all the events for a frame image. During this waiting period, a car traveling at $120$ km/hour covers nearly $7$ meters.
For timely object recognition or accident warning, methods that can process events asynchronously without waiting for the accumulation of the final events are more desirable.

In this paper, we propose a new spike learning system that uses a spiking neural network (SNN) with a novel temporal coding to deal specifically with the task of asynchronous event-driven object recognition. It can reduce recognition delay and realize much better time efficiency. It can maintain recognition accuracy competitive with existing approaches. The major contributions of this paper are: \begin{itemize} \item We design a new spike learning system that can exploit both the asynchronous arrival time of events and the asynchronous processing capability of neural networks to reduce delay and optimize timing efficiency. The first ``asynchronous" means that events are processed immediately in a first-come-first-served manner. The second ``asynchronous" means that the network can output recognition results without waiting for all neurons to finish their work. Integrating them together can significantly enhance time efficiency, computational efficiency, and energy efficiency. \item For the first time, recognition time efficiency is defined and evaluated extensively over a list of event-based datasets as one of the major objectives of object recognition. \item We develop a novel temporal coding scheme that converts each event's asynchronous arrival time and data to SNN spike time. It makes it possible for the learning system to process events immediately without delay and to use the minimum number of events for timely recognition automatically. \item We conduct extensive experiments over a list of $7$ event-based datasets such as KITTI \cite{Geiger2012CVPR} and DVS-CIFAR10 \cite{li2017cifar10}. Experiment results demonstrate that our system had remarkable time efficiency with competitive recognition accuracy. Over the KITTI dataset, our system reduced recognition delay by $56.3\%$ to $91.7\%$ in various experiment settings. \end{itemize}

The rest of the paper is organized as follows. Section \ref{sec:related work} introduces the related work. Section \ref{sec:methods} provides the details of the proposed spiking learning system. Experiment datasets and results are given in Sections \ref{sec:datasets} and \ref{sec:evaluation}, respectively. We conclude the paper in Section \ref{sec:conclusion}.

\section{Related Work} \label{sec:related work}
LiDAR uses active sensors that emit their own laser pulses for illumination and detect the reflected energy from the objects. Each reflected laser pulse is recorded as an event. From the events, object detection and recognition can be carried out by various methods, either traditional feature extraction methods or deep learning methods. Behley et al. \cite{behley2013laser} proposed a hierarchical segmentation approach for laser range data to realize object detection. Wang and Posner \cite{wang2015voting} applied a voting scheme to process LiDAR range data and reflectance values to enable 3D object detection. Gonzalez et al. \cite{gonzalez2017board} explored the fusion of RGB and LiDAR-based depth maps. Tatoglu and Pochiraju \cite{tatoglu2012point} presented techniques to model the intensity of the laser reflection during LiDAR scanning to determine the diffusion and specular reflection properties of the scanned surface. Hernandez et al. \cite{hernandez2014lane} took advantage of the reflection of the laser beam to identify lane markings on the road surface. Asvadi et al.
\cite{asvadi2017depthcn} introduced a convolutional neural network (CNN) to process 3D LiDAR point clouds and predict 2D bounding boxes at the proposal phase. An end-to-end fully convolutional network was used for a 2D point map projected from 3D-LiDAR data in \cite{li2016vehicle}. Kim and Ghosh \cite{kim2016robust} proposed a framework utilizing fast R-CNN to improve the detection of regions of interest and the subsequent identification of LiDAR data. Chen et al. \cite{chen2017multi} presented a top view projection of the LiDAR point cloud data and performed 3D object detection using a CNN-based fusion network.

DVS, also called a neuromorphic vision sensor or silicon retina, records the changes of pixel intensity at fine time resolution as events. DVS-based object recognition is still at an early stage. Lagorce et al. \cite{lagorce2016hots} utilized the spatial-temporal information from DVS to build features and proposed a hierarchical architecture for recognition. Liu et al. \cite{liu2016combined} combined gray-scale Active Pixel Sensor (APS) images and event frames for object detection. Chen \cite{chen2018pseudo} used APS images on a recurrent rolling CNN to produce pseudo-labels and then used them as targets for DVS data to do supervised learning with the tiny YOLO architecture.

Built on the neuromorphic principle, the SNN is considered a natural fit for neuromorphic vision sensors and asynchronous event-based sensors. SNNs imitate biological neural networks by directly processing spike pulse information with biologically plausible neuronal models \cite{maass1997networks}\cite{ponulak2011introduction}. Regular neural networks process information in a fully synchronized manner, which means every neuron in the network needs to be evaluated. Some SNNs, on the contrary, can work in asynchronous mode, where not all neurons are to be stimulated \cite{susi2018fns}. Attempts to apply SNNs to neuromorphic applications include pattern generation and control in neuro-prosthetics systems \cite{ponulak2006resume}, obstacle recognition and avoidance \cite{ge2017spiking}, spatio- and spectro-temporal brain data mapping \cite{kasabov2014neucube}, etc. Attempts were also made to use SNNs for object detection and recognition, either over traditional frame-based image data \cite{cannici2019asynchronous,zhang2019tdsnn,lee2016training}, or over event-based LiDAR and DVS data \cite{zhou2020deepscnn}\cite{wu2019direct}.

A class of SNNs was developed with temporal coding, where spiking time instead of spiking rate, spiking count, or spiking probability is used to encode neuron information. The SpikeProp algorithm \cite{bohte2002error} described the cost function in terms of the difference between desired and actual spike times. It is limited to learning a single spike. Supervised Hebbian learning \cite{legenstein2005can} and ReSuMe \cite{ponulak2010supervised} were primarily suitable for the training of single-layer networks only. As far as we know, all existing SNN works on event-based sensor data need a pre-set time duration to accumulate events into frame-based images before recognition. How to break this limitation and develop SNNs with fully asynchronous operation is still an open problem.
\section{A Spike Learning System} \label{sec:methods} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{fig/systemstages.PNG} \caption{Flow diagram of the proposed spike learning system.} \label{fig:flowdiagram} \end{figure} Fig.~\ref{fig:flowdiagram} shows the workflow of our proposed spike learning system. The pipeline consists of three major blocks: 1) pre-processing of asynchronous events from event-based sensors; 2) temporal coding of the pre-processed events into SNN input spike times; 3) object recognition with the SNN.

\subsection{Pre-processing of Events}
The event data in standard LiDAR datasets are usually given as a tuple $(x, y, z, r)$, where $(x, y, z)$ is the location of the object, and $r$ is the received light intensity. The events form a point cloud at a certain time-stamp. Existing LiDAR datasets usually contain this time-stamp only instead of per-event timing. For real-time applications, events may be collected asynchronously. Each event comes with its own arrival time $t_a$, which is the summation of the laser pulse receiving time, the LiDAR signal processing time, and the data transmission time from the sensor to the learning system.

With voxelization or other similar techniques \cite{zhou2020deepscnn}, we can compress the event data by quantizing the large spatial region into a small and fixed 3D integer grid $(x_v, y_v, z_v)$. For example, many papers quantize the KITTI dataset point cloud into a $768\times 1024 \times 21$ grid. Let the spatial quantization step sizes in the three dimensions be $\Delta_x$, $\Delta_y$, and $\Delta_z$, respectively. Then the event $(x, y, z, r)$ falls into the voxel \begin{eqnarray} & {\cal V}(x_v, y_v, z_v) = \{(x, y, z): x_v \Delta_x \leq x < (x_v+1)\Delta_x, \nonumber \\ & y_v \Delta_y \leq y < (y_v+1)\Delta_y, z_v \Delta_z \leq z < (z_v+1)\Delta_z\}. \end{eqnarray} A voxel may have multiple or zero events due to event sparsity. Its value can be set as the number of falling events, the light intensity $r$, the object distance $\sqrt{x^2+y^2+z^2}$, or the laser light flying time $2\sqrt{x^2+y^2+z^2}/c$ with light speed $c$ \cite{zhou2020deepscnn}. In our experiments, we set the voxel value as \begin{equation} D(x_v, y_v, z_v) = \left\{ \begin{array}{ll} \frac{2\sqrt{x^2+y^2+z^2}}{c}, & (x, y, z) \in {\cal V}(x_v, y_v, z_v) \\ 0, & {\rm otherwise} \end{array} \right. \end{equation} We use the first arriving event $(x, y, z, r)$ inside this voxel to calculate $D(x_v, y_v, z_v)$. If no event falls inside this voxel, then $D(x_v, y_v, z_v)=0$.

For DVS cameras, each event is recorded as $(x, y, t, p)$, where $(x, y)$ is the pixel coordinate in 2D space, $t$ is the time-stamp or arrival time of the event, and $p$ is the polarity indicating the brightness change over the previous time-stamp. The polarity is usually set as $p(x, y, t) = \pm 1$ or $p(x, y, t) = \{0, 1\}$ \cite{chen2019multi}. Pixels without significant intensity change will not output events. DVS sensors are noisy because of coarse quantization, inherent photon shot noise, transistor circuit noise, arrival timing jitter, etc. By accumulating the event stream over an exposure time duration, we can obtain an image frame. Specifically, accumulating events over exposure time from $t_0$ to $t_K$ gives the image \begin{equation} D(x_v, y_v) = \sum_{t=t_0}^{t_K} p(x_v, y_v, t) + I(x_v, y_v), \label{eq3.5} \end{equation} where $(x_v, y_v)$ is the pixel location, and $I(x_v, y_v)$ is the initial image at time $t_0$. We can set $I(x_v, y_v)=0$ from the start.
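To make the pre-processing concrete, the following is a minimal Python/NumPy sketch of the two steps above (LiDAR voxelization and DVS event accumulation). The event tuple layout, grid handling, and function names are illustrative assumptions, not the exact pipeline used in our experiments.
\begin{verbatim}
import numpy as np

C_LIGHT = 3.0e8  # speed of light (m/s)

def voxelize_lidar(events, grid_shape, deltas):
    """Quantize LiDAR events (x, y, z, r, t_a) into a fixed 3D grid.
    The voxel value is the laser flying time 2*dist/c of the first
    arriving event in the voxel; empty voxels stay 0."""
    D = np.zeros(grid_shape)
    filled = np.zeros(grid_shape, dtype=bool)
    for (x, y, z, r, t_a) in sorted(events, key=lambda e: e[4]):
        v = (int(x // deltas[0]), int(y // deltas[1]), int(z // deltas[2]))
        if all(0 <= v[k] < grid_shape[k] for k in range(3)) and not filled[v]:
            D[v] = 2.0 * np.sqrt(x**2 + y**2 + z**2) / C_LIGHT
            filled[v] = True
    return D

def accumulate_dvs(events, height, width, t0, tK):
    """Accumulate DVS events (x, y, t, p) over [t0, tK] into a frame,
    starting from a zero initial image I."""
    D = np.zeros((height, width))
    for (x, y, t, p) in events:
        if t0 <= t <= tK:
            D[y, x] += p
    return D
\end{verbatim}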
Obviously, a longer exposure duration $t_K-t_0$ leads to better image quality for slow-moving objects but blurring for fast-moving objects. Most existing methods pre-set an exposure duration, such as $100$ milliseconds for DVS-CIFAR10, to construct the image $D(x_v, y_v)$ for the subsequent recognition. In contrast, our proposed system does not have such a hard exposure time limitation and can automatically give recognition outputs within the best exposure time duration for the task.

\subsection{Temporal Coding for Spiking Neural Networks} \label{sec:snn_model}
In SNNs, neurons communicate with spikes or action potentials through layers of the network. When a neuron's membrane potential reaches its firing threshold, the neuron will emit a spike and transmit it to other connected neurons \cite{ponulak2011introduction}. We adopt the spike-time-based spiking neuron model of \cite{mostafa2018supervised}. Specifically, we use the non-leaky integrate-and-fire (n-LIF) neuron with exponentially decaying synaptic current kernels. The membrane potential is described by \begin{equation} \frac{dv_{j}(t)}{dt} = \sum_{i} w_{ji} \kappa(t-t_{i}), \label{eq3.10} \end{equation} where $v_{j}(t)$ is the membrane potential of neuron $j$, $w_{ji}$ is the weight of the synaptic connection from the input neuron $i$ to the neuron $j$, $t_i$ is the spiking time of the neuron $i$, and $\kappa(t)$ is the synaptic current kernel function. The value of neuron $i$ is encoded in the spike time $t_i$. The synaptic current kernel function determines how the spike stimulation decays over time. We use the exponentially decaying kernel \begin{equation} \kappa(t)=u(t)e^{-\frac{t}{\tau}}, \end{equation} where $\tau$ is the decay time constant, and $u(t)$ is the unit step function defined as \begin{equation} u(t) = \left\{ \begin{array}{ll} 1, \;\;\; & \text{if $t\geq0$}\\ 0, \;\;\; & \text{otherwise} \end{array} \right. . \end{equation} Fig. \ref{fig:vmem} illustrates how this neuron model works. A neuron is only allowed to spike once unless the network is reset or a new input pattern is presented. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{fig/spiking_mech.pdf} \caption{The working principle of the n-LIF neuron model. (a) Four input neurons spike at time $t_i$, $i=1, \cdots, 4$. (b) Synaptic current $\kappa (t-t_i)$ jumps and decays over time. (c) Membrane voltage potential $v_j(t)$ rises towards the firing threshold. (d) The neuron $j$ emits a spike at time $t_j=t_{out}$ when the threshold is crossed.} \label{fig:vmem} \end{figure} An analog circuit to implement this neuron was designed in \cite{zhou2020deepscnn} and was shown to be highly energy efficient. For training or digital (software) implementations, however, we do not need to emulate the operation (\ref{eq3.10}). Instead, we skip the dynamic time-evolution and consider only the membrane voltage at the spiking time $t_{j}$. For this purpose, solving (\ref{eq3.10}) we get \begin{equation} v_j(t_j) = \sum_{i\in C} w_{ji} {\tau} \left( 1- e^{-\frac{t_{j}-t_i}{\tau}} \right), \end{equation} where the set $C=\{i: t_i < t_{j}\}$ includes all (and only those) input neurons that spike before $t_{j}$. A larger $\tau$ leads to a lower $v_j(t_j)$. For any $\tau$, we can find an appropriate voltage threshold so that the active input neuron set $C$ and the output spike time $t_j$ do not change. Therefore, in a digital implementation, we can simply set both the voltage threshold and $\tau$ to 1.
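As a quick numerical sanity check of this neuron model, the sketch below integrates (\ref{eq3.10}) with $\tau=1$ and a firing threshold of $1$, and compares the resulting threshold-crossing time with the closed-form membrane-voltage expression above. The four input spike times and weights are made-up illustrative numbers.
\begin{verbatim}
import numpy as np

def simulate_nlif(spike_times, weights, threshold=1.0, tau=1.0,
                  dt=1e-3, t_max=10.0):
    """Forward-Euler integration of dv/dt = sum_i w_i * kappa(t - t_i);
    returns the first threshold-crossing time (the output spike time)."""
    v = 0.0
    for t in np.arange(0.0, t_max, dt):
        active = spike_times <= t
        current = np.sum(weights[active]
                         * np.exp(-(t - spike_times[active]) / tau))
        v += current * dt
        if v >= threshold:
            return t
    return np.inf  # the neuron never fires within t_max

# Four input neurons (cf. the figure above); numbers are hypothetical
t_in = np.array([0.2, 0.5, 0.9, 1.4])
w_in = np.array([0.4, 0.6, 0.5, 0.3])
t_sim = simulate_nlif(t_in, w_in)

# Closed form from v_j(t_j)=1 with tau=1, assuming all four inputs
# spike before the output (true here): t_j = ln(sum w e^{t} / (sum w - 1))
t_closed = np.log(np.sum(w_in * np.exp(t_in)) / (np.sum(w_in) - 1.0))
print(t_sim, t_closed)  # both approximately 1.59
\end{verbatim}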
With $v_j(t_j)=1$, the neuron $j$'s spike time satisfies \begin{equation} e^{t_{j}} = \sum_{i\in C} e^{t_{i}} \frac{w_{ji}} {\sum_{\ell \in C} w_{j\ell}-1}. \label{eq3.20} \end{equation} In a software SNN implementation, we can directly use $e^{t_i}$ as the neuron value and $w_{ji}/({\sum_{\ell \in C} w_{j\ell}-1})$ as the weights, and (\ref{eq3.20}) is then the input-output equation of a feed-forward fully connected neural network layer. No extra nonlinear activation is needed because the effective weights are themselves nonlinear: they depend on the inputs through the active set $C$.

At the first (or input) layer, we need to encode the pre-processed event data $D(x_v, y_v, z_v)$ into spike times $t_i$. Existing methods such as \cite{zhou2020deepscnn} simply let $t_i = D(x_v, y_v, z_v)$, which, when applied to event-driven data, ignores the inherent temporal information and the asynchronous property of the events. To fully exploit the asynchronous event property, we propose the following new temporal coding scheme that encodes both the event value $D(x_v, y_v, z_v)$ and the arrival time $t_a$ of each event.

Consider a LiDAR event $(x, y, z, r)$ arriving at time $t_a$. During preprocessing, assume it is used to update the voxel value $D(x_v, y_v, z_v)$. Also assume that this voxel $(x_v, y_v, z_v)$ corresponds to the $i$th input neuron in the SNN input layer. This neuron's spiking time is then set as \begin{equation} t_i = \max\{\beta, t_a\} + \alpha D(x_v, y_v, z_v), \label{eq3.25} \end{equation} where $\beta$ is a time parameter used to adjust delayed processing of early arrival events, and $\alpha$ is a constant to balance the value of $D(x_v, y_v, z_v)$ and $t_a$ so that the two terms in $t_i$ are not orders of magnitude apart. The first term on the right of (\ref{eq3.25}) encodes the event arrival time. If $\beta = 0$, then there is no delay in encoding the arrival time: all the events are processed immediately upon arrival. If $\beta$ is set to the image frame time, we have the conventional synchronous event processing scheme where object recognition does not start until all the events are accumulated together. We can use $\beta$ to control the exposure time $t_K$. The second term encodes the event value. If $\alpha$ is set to $0$, then only the event arrival time is encoded. In this case, $\beta$ should be set to a small value so that the temporal information can be fully exploited (e.g., if $\beta = 0$, then $t_i = t_a$). As a matter of fact, (\ref{eq3.25}) is a general temporal encoding framework. Various encoding strategies can be realized with appropriate parameters $\alpha$ and $\beta$, such as encoding the event value only, encoding the event arrival time only, or encoding both the event value and the arrival time.

For DVS sensors, assume similarly that the pixel $(x_v, y_v)$ is the $i$th input neuron. During the exposure time, while events are being accumulated into $D(x_v, y_v)$ according to (\ref{eq3.5}), the spiking time is set as the smallest $t_i$ that satisfies \begin{equation} \sum_{t=t_0}^{t_i} p(x_v, y_v, t) + I(x_v, y_v) \geq \Gamma(t_i). \label{eq3.26} \end{equation} $\Gamma(t)$ is a threshold function that can be set as a constant $\alpha$ or a linearly decreasing function \begin{equation} \Gamma(t) = \beta(t_K-t), \end{equation} with rate $\beta$. If $\beta=0$, then the neuron spikes immediately when the pixel value is positive. A sufficiently large $\beta$ effectively makes us wait and accumulate all the events to form a frame image before SNN processing. In this case, we fall back to the traditional synchronous operation mode.
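As an illustration only, here is a small Python sketch of the two coding rules (\ref{eq3.25}) and (\ref{eq3.26}) with the linearly decreasing threshold; the function names and the per-pixel event-list format are our own assumptions, and the crossing time between event arrivals is computed in closed form under those assumptions.
\begin{verbatim}
import math

def lidar_spike_time(t_a, D, alpha, beta):
    """LiDAR coding: delayed arrival time plus scaled voxel value."""
    return max(beta, t_a) + alpha * D

def dvs_spike_time(pixel_events, t0, tK, beta, I0=0.0):
    """DVS coding: the pixel spikes at the earliest time its accumulated
    polarity reaches the decreasing threshold Gamma(t) = beta*(tK - t).
    pixel_events is a list of (t, p) pairs for this pixel."""
    acc = I0
    events = sorted((t, p) for (t, p) in pixel_events if t0 <= t <= tK)
    for k, (t, p) in enumerate(events):
        acc += p
        if acc >= beta * (tK - t):       # crossed at this event time
            return t
        if beta > 0 and acc > 0:         # threshold keeps decreasing;
            t_cross = tK - acc / beta    # it meets acc at this time
            t_next = events[k + 1][0] if k + 1 < len(events) else tK
            if t_cross <= t_next:        # crossing before the next event
                return t_cross
    return math.inf                      # the pixel never spikes
\end{verbatim}
With $\beta=0$ the threshold is zero, so a pixel spikes as soon as its accumulated value is non-negative; a sufficiently large $\beta$ defers all spikes toward the end of the frame, recovering the synchronous mode discussed above.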
If a pixel's intensity accumulates faster, then it spikes earlier. If the pixel accumulates slower or even stays near 0, then it may never spike. With the proposed temporal coding scheme, the system is able to output a recognition decision asynchronously after some accumulation time $t$ between $t_0$ and $t_K$, and the event accumulation operation stops at $t$. Only the events between $t_0$ and $t$ are used for inference. Therefore, $t_0$ and $t_K$ can simply be set as the start and end times of the recognition task, such as an image frame time. This avoids the headache of looking for the best pre-set accumulation time. Note that no pre-set accumulation time can be optimal for all images. Some images need longer accumulation, while others need shorter accumulation. The proposed temporal coding enables our system to resolve this challenge in a unique way: we just pre-set a very large accumulation time $t_K$, and the system automatically finds the optimal accumulation time $t$ for each image. In other words, instead of a fixed pre-set accumulation time, a unique event accumulation time $t$ (well before $t_K$) is found for each image automatically by the system.

\subsection{Object recognition with SNN}
Many other SNNs use more complex neuron models or use spike counts or rates as neuron values. In comparison, our neuron model is relatively simple and fires only a single spike, which makes our SNN easier to train and more energy-efficient when implemented in hardware. Based on the neuron input/output expression (\ref{eq3.20}), the gradient calculation and gradient-descent-based training become no different from those of conventional DNNs. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig/snn.pdf} \caption{SNN with spiking fully-connected (FC) layers for object recognition.} \label{fig:implementation} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{fig/netarchitecture.PNG} \caption{SNN with multiple SCNN layers and FC layers for object recognition.} \label{fig:scnn} \end{figure} Based on (\ref{eq3.20}), we can implement both spiking fully-connected (FC) layers and spiking convolutional neural network (SCNN) layers just as conventional DNN FC and CNN layers. Fig. \ref{fig:implementation} shows an SNN with two FC layers: one input layer with our temporal coding, one hidden FC layer, and one output FC layer. SCNN layers work in a similar way to traditional CNN layers but are equipped with spiking kernels. For more complex tasks, we can apply multiple SCNN layers and FC layers, as shown in Fig. \ref{fig:scnn}. Pooling layers such as max-pooling and average-pooling can also be used, just as in conventional DNNs. The standard back-propagation technique can be used to train the weights $w_{ji}$.

For real-time object recognition, the SNN spiking time range is on the same scale as the event arrival time, specifically, starting at $t_0$ and ending at $t_K$. The SNN takes in spikes sequentially according to their arrival time. Each neuron keeps accumulating the weighted inputs and comparing the sum with its threshold until a set of input spikes fires the neuron. Once a neuron spikes, it does not process any further input spikes unless it is reset or presented with a new input pattern. The recognition decision is made at the time of the first spike among the output neurons. A smaller $t_j$ or $e^{t_j}$ means a stronger classification output.
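To make the layer equation (\ref{eq3.20}) concrete, the following NumPy sketch implements the forward pass of one spiking FC layer in the $z=e^{t}$ domain, including the search for a consistent causal set $C$. It is a minimal illustration in the spirit of \cite{mostafa2018supervised}, not our exact training code.
\begin{verbatim}
import numpy as np

def spike_layer_forward(z_in, W):
    """One spiking FC layer. z_in: exp(input spike times), shape (N_in,).
    W: weights, shape (N_out, N_in). Returns exp(output spike times),
    with np.inf for neurons that never reach the threshold."""
    order = np.argsort(z_in)            # process inputs in arrival order
    z_sorted = z_in[order]
    z_out = np.full(W.shape[0], np.inf)
    for j in range(W.shape[0]):
        w_sorted = W[j, order]
        w_cum, wz_cum = 0.0, 0.0
        for k in range(len(z_sorted)):
            w_cum += w_sorted[k]
            wz_cum += w_sorted[k] * z_sorted[k]
            if w_cum <= 1.0:            # threshold cannot be reached yet
                continue
            z_cand = wz_cum / (w_cum - 1.0)   # candidate e^{t_j}
            z_next = z_sorted[k + 1] if k + 1 < len(z_sorted) else np.inf
            if z_sorted[k] <= z_cand < z_next:  # causal set is consistent
                z_out[j] = z_cand
                break
    return z_out

# Example with made-up numbers: 3 inputs, 2 output neurons
z0 = np.exp(np.array([0.1, 0.4, 0.7]))
W = np.array([[0.9, 0.8, 0.3],
              [0.2, 0.1, 0.4]])
print(np.log(spike_layer_forward(z0, W)))  # spike times; inf = no spike
\end{verbatim}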
Also, a smaller $t_j$ as output means that the inference delay can be reduced, which is an outcome of the asynchronous working principle of the SNN. Define the input of the SNN as ${\bf z}_0$ with elements $z_{0,i} = e^{t_i}$, and the output of the SNN as ${\bf z}_{L}$ with elements $z_{L, i}=e^{t_{L, i}}$. Then we have ${\bf z}_{L} = f({\bf z}_0; {\bf w})$ with a nonlinear mapping $f$ and trainable weights ${\bf w}$, which include all SNN weights $w_{ji}$ and the temporal coding parameter $\beta$. Let the target output be class $c$. We then train the network with the loss function \begin{equation} {\cal L} ({\bf z}_L, c) = -\ln \frac{z_{L, c}^{-1}}{\sum_{i\neq c} z_{L, i}^{-1}} + k \sum_j \max\left\{0, 1-\sum_i w_{ji}\right\}, \label{eq3.50} \end{equation} where the first term is to make $z_{L, c}$ the smallest (equivalently, $t_{L, c}$ the smallest), while the second term is to make sure that the sum of the input weights of each neuron is larger than $1$. The parameter $k$ adjusts the weighting between these two terms. The training of (\ref{eq3.50}) can be conducted with the standard backpropagation algorithm similar to conventional DNNs, just as in \cite{mostafa2018supervised}. Nevertheless, a problem of \cite{mostafa2018supervised} is that the presented algorithm did not require $t_j> t_i$ for $i \in {\cal C}$. This could lead to $t_j \leq t_i$ or even negative $t_j$, which is not practical. We corrected this problem and implemented the training of (\ref{eq3.50}) in the standard deep learning platform TensorFlow.

\section{Evaluation Datasets} \label{sec:datasets}
To investigate the effectiveness of our proposed system, we evaluated it on a list of $7$ LiDAR and DVS datasets introduced below. Their sample images are shown in Fig. \ref{fig:imagesamples}. \begin{figure}[t] \centering \frame{\includegraphics[width=0.9\linewidth]{fig/datasets_new.pdf}} \caption{Sample images of LiDAR and DVS datasets used in this paper.} \label{fig:imagesamples} \end{figure} \subsection{LiDAR Datasets} \paragraph{\bf KITTI Dataset} In order to evaluate the ability of our proposed system on complex real-life data in the autonomous driving scenario, we trained and tested it on the KITTI dataset \cite{Geiger2012CVPR}. We utilized the KITTI 3D object detection benchmark, specifically the point cloud data collected by a Velodyne HDL-64E rotating 3D laser scanner, which provided 7481 labeled samples. However, the provided point cloud data cannot be used directly because all label annotations (location, dimensions, observation angle, etc.) are given in camera coordinates instead of Velodyne coordinates. To handle this mismatch, we first mapped the point cloud data $(x, y, z)$ to an expanded front view $(x_{\rm front}, y_{\rm front})$, whose size was determined by the resolution of the LiDAR sensor. We used the transformation \begin{equation} x_{\rm front} = \left \lfloor -\frac{\arctan\frac{y}{x}} {R_{h}} \right \rfloor, y_{\rm front} = \left \lfloor -\frac{\arctan\frac{z}{\sqrt{x^2+y^2}}} {R_{v}} \right \rfloor \end{equation} where $R_{h}$ and $R_{v}$ are the horizontal and vertical angular resolutions in radians, respectively. In order to project the label annotations onto the front view plane, we first calculated the bounding box in camera coordinates and transferred the corners to Velodyne coordinates by multiplying with the transformation matrix $T_{c2v}$. The object location was mapped onto the front view similarly, as illustrated in Fig. \ref{fig:kitti_process}.
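For illustration, a minimal NumPy sketch of this front-view projection is given below. The use of \texttt{arctan2} (for robustness when $x\leq 0$), the homogeneous handling of the box corners, and the function names are our own assumptions rather than the exact KITTI tooling used in our experiments.
\begin{verbatim}
import numpy as np

def lidar_to_front_view(points, R_h, R_v):
    """Map Velodyne points (N, 3) to integer front-view indices using the
    azimuth/elevation discretization above; R_h, R_v are the horizontal
    and vertical angular resolutions in radians."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    x_front = np.floor(-np.arctan2(y, x) / R_h).astype(int)
    y_front = np.floor(-np.arctan2(z, np.sqrt(x**2 + y**2)) / R_v).astype(int)
    return x_front, y_front

def corners_cam_to_velodyne(corners_cam, T_c2v):
    """Transfer 3D bounding-box corners (N, 3) from camera to Velodyne
    coordinates with an assumed 4x4 homogeneous matrix T_c2v."""
    hom = np.hstack([corners_cam, np.ones((corners_cam.shape[0], 1))])
    return (hom @ T_c2v.T)[:, :3]
\end{verbatim}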
Based on the front view locations, objects were cropped with a fixed size to establish the recognition dataset. \begin{figure}[tbp!] \centering \includegraphics[width=0.6\linewidth]{fig/kitti_process.pdf} \caption{Transformation of KITTI 3D Point Clouds into 2D LiDAR front view images.} \label{fig:kitti_process} \end{figure} The change from 3D to 2D reduces the computational complexity of recognition. We also artificially generated an arrival time $t_a$ for each event $(x, y, z)$ linearly with respect to $x$. The processed KITTI dataset contains 32456 training samples and 8000 testing samples covering 8 classes of KITTI objects. \paragraph{\bf N-Sydney} The N-Sydney Urban Objects dataset \cite{chen2014performance} is an event-based LiDAR dataset containing 26 object classes. We considered only the following 9 classes: Van, Tree, Building, Car, Truck, 4wd, Bus, Traffic light, and Pillar. We artificially generated an arrival time $t_a$ for each event. \subsection{DVS Datasets} \paragraph{\bf DVS-CIFAR10} The DVS-CIFAR10 dataset \cite{li2017cifar10} is converted from the popular CIFAR10 dataset. It has 10000 samples covering 10 classes. We split the dataset into a training set of $8000$ samples and a testing set of $2000$ samples while adopting the full $128 \times 128$ pixel scene. \paragraph{\bf DVS-barrel} The DVS-barrel dataset has 6753 samples with 36 classes \cite{orchard2015hfirst}, which we split into a training set of 3453 samples and a test set of 3000 samples. We used the ``ExtractedStabilized'' version of the dataset, which, rather than using the full $128 \times 128$ pixel scene, extracts individual characters into $32 \times 32$ pixel scenes. \paragraph{\bf N-MNIST, N-Caltech101, Hand-Gesture} The N-MNIST \cite{orchard2015hfirst} and N-Caltech101 \cite{zhang2006svm} datasets are conversions of the two popular image datasets MNIST and Caltech101. N-MNIST has $10$ object classes and an image size of $28 \times 28$. Caltech101 has 100 object classes plus a background class. The image size on average is $200 \times 300$. The Hand-Gesture dataset \cite{huang2011gabor} is a DVS dataset with an image size of $120\times 320$. \section{Evaluation} \label{sec:evaluation} \subsection{Experiment Setup} Table \ref{tbl:snn_config} lists the network configurations we designed based on the proposed spiking learning system architecture for the datasets shown in Fig. \ref{fig:imagesamples}. \begin{table}[t] \centering \caption{Our network models. Sample notation explained: F256 (fully connected layer with 256 spiking neurons), C5-48 (convolutional layer with 48 spiking kernels of size $5 \times 5$), AP (average-pooling layer with stride 2).} \label{tbl:snn_config} \scalebox{0.9}{ \begin{tabular}{r|l} \hline KITTI & (50$\times$118$\times$1): C5-48, C5-24, F256, F8 \\ N-Sydney & (32$\times$32$\times$32): C5-32, C3-32, F128, F9 \\ \hline \hline DVS-barrel & (input 1024): F2000, F36 \\ N-MNIST & (28$\times$28$\times$1): C5-32, C5-16, F10 \\ N-Caltech101 & (200$\times$300): C5-16, C3-8, F64, F101 \\ HandGesture & (120$\times$320): C5-32,C3-48,C3-16,F64,F10 \\ \hline & Small: C3-32, AP, C3-48, AP, F256, F10 \\ DVS- & Medium: C3-32, C3-48, AP, C3-64, AP, \\ CIFAR10 & \hspace{1.5cm} F256, F10 \\ (128$\times$128) & Large: C3-32, C3-64, AP, C3-128, \\ & \hspace{1cm} C3-256, AP, F1024, F10 \\ \hline \end{tabular}} \end{table} All datasets were tested over models with multiple SCNN and FC layers because our experiments showed that they performed much better than simpler SNNs with FC layers only.
As for the KITTI dataset, a model with two SCNN layers and two FC layers was employed. The input size was $50\times118\times1$. The kernel size for the SCNN layers was $5\times5$, with a stride of 2. The numbers of kernels were 48 and 24, respectively. The output of the second SCNN layer had size $13\times30\times24$ and was flattened and passed to the first FC layer (with 256 spiking neurons). The second FC layer had 8 output channels. The batch size was set to 10, and the initial learning rate was 1e-3 with decay. The Adam optimizer was adopted for training. The N-Sydney Urban Objects dataset and the N-Caltech101 dataset were tested over similar models with two SCNN layers and two FC layers. The Hand-Gesture dataset was tested over a model with three SCNN layers and two FC layers.

The DVS-CIFAR10 dataset was considered the most challenging one among these datasets. It is also much more challenging than the conventional frame-based CIFAR10 due to noisy samples and a single intensity channel. We created three different spiking network structures for a fair comparison with \cite{wu2019direct}, which also created three SNN structures for comparison with other SNN and DNN results and reported the best accuracy so far. The training employed Adam as the optimizer with a batch size of 8 and 100 training epochs with an exponentially decaying learning rate. Among the learning-rate schedules we tried, the fastest convergence was obtained when the learning rate started at 1e-2 in epoch 1 and ended at 1e-5 in epoch 100. Note that we tried various optimizers such as SGD and Adam on each model during training. Their training performance did not show a significant difference.

\subsection{Experiment Results}
We used recognition accuracy ($A$) and recognition (inference) delay ($D$) as performance metrics to evaluate and compare our models with the state of the art. Based on $A$ and $D$, we calculated the performance gains ($G$) of our model as \begin{equation} G_{\rm acc} = \frac{A_{\rm ours}-A_{\rm ref}}{A_{\rm ref}}, \;\;\; G_{\rm time} = \frac{D_{\rm ref} - D_{\rm ours}}{D_{\rm ref}}, \end{equation} where $G_{\rm acc}$ and $G_{\rm time}$ are the accuracy gain and time efficiency gain of our model over some reference model, respectively. The time efficiency gain is also the ratio of delay/latency reduction. The delay includes both the delay of the inference algorithm, which we call the ``inference delay", and the delay caused by waiting for the asynchronous arrival of events. Their sum is the ``total delay". Although we do not directly evaluate computational complexity and energy efficiency, we must point out that SNN-based models, in general, have lower computational complexity and higher energy efficiency, as pointed out in \cite{zhou2020deepscnn} and many other SNN publications.

\paragraph{\bf KITTI} We take the transformed KITTI dataset as a representative of LiDAR datasets to interpret the evaluation results in detail. The results of other datasets are provided at the end of this section. For the KITTI dataset, we compared our proposed system against the conventional CNN model VGG-16. To make the processed KITTI dataset work on VGG-16, we replicated the single intensity channel into three RGB color channels and resized the image from $50\times118$ to $128 \times 128$. We utilized the VGG-16 pre-trained on ImageNet for transfer learning.
Table \ref{tbl:acc_kitti} shows that our system not only achieved better accuracy (with a gain of $G_{\rm acc}=5.46\%$ over VGG-16), but also had much smaller latency or higher time efficiency (with a gain of $G_{\rm time}$ between $56.3\%$ and $91.7\%$ over VGG-16). The reason for the relatively lower testing accuracy of VGG-16 might be that the processed KITTI dataset had a single intensity channel and a smaller image size than the ideal VGG-16 inputs. A smaller network can quite often achieve better accuracy than a complex network over a relatively small dataset. \begin{table}[t] \caption{Comparison of our SNN model with the VGG-16 model over the KITTI dataset for accuracy and timing efficiency.} \label{tbl:acc_kitti} \begin{center} \scalebox{0.9}{ \begin{tabular}{cc c c} \hline Model & VGG-16 & Our Model & Gain \\ \hline Accuracy & 91.6\% & 96.6\% & 5.46\% \\ \hline Inf. Delay (CPU) & 38 ms & 8.5 ms & 77.6\% \\ Total Delay (CPU) & 63 ms & 27 ms & 57.1\% \\ \hline Inf. Delay (GPU) & 23 ms & 1.9 ms & 91.7\% \\ Total Delay (GPU) & 48 ms & 21 ms & 56.3\% \\ \hline \end{tabular}} \end{center} \end{table}

Next, let us focus on the delay and time efficiency comparison results. To obtain the inference delay, we set the temporal coding parameter $\beta$ so that our system worked in synchronous mode, which gave us the SNN's average running time for an image. We did this on both an Intel Core i9 CPU and an NVIDIA GeForce RTX 2080Ti GPU. On the GPU, our model spent 1.9 ms (milliseconds) while VGG-16 needed 23 ms. Our model ran faster because it was both much simpler and asynchronous (inference may terminate at a very small spiking time $t_{out}$ at the last layer). To obtain the total delay, we set the temporal coding parameter so that the system worked in the asynchronous mode to exploit both the asynchronous event arrival property and the SNN's asynchronous processing property. By contrast, VGG-16 had to wait until all the events were received before starting processing. Since the KITTI LiDAR generated $360^\circ$ image frames at a rate of $10$ Hz, the collection of all events for a $90^\circ$ field-of-view image had a delay of $25$ ms. So the total delay on the GPU was $25+23=48$ ms. In contrast, our model had a total delay of $21$ ms on average, a delay reduction (and time efficiency gain) of $56.3\%$. We can also see that the asynchronous event arrival time dominated the total delay. Our model needed just a fraction of the events for inference, so it had a much smaller delay. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig/effi_lidar_new.png} \caption{Distributions of (a) inference time and (b) ratio of events used in inference, on LiDAR datasets.} \label{fig:effi_lidar} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig/effi_dvs_new.png} \caption{Distributions of (a) inference time and (b) ratio of events used in inference, on DVS datasets.} \label{fig:effi_dvs} \end{figure} The distribution of the ratio of events used by our model on the KITTI dataset is shown in Fig. \ref{fig:effi_lidar}(b). On average, $76\%$ of the events were used in inference. These figures also demonstrate that our system worked asynchronously and automatically selected various numbers of events in different recognition tasks. In addition, we calculated the ``ideal inference delay", which is defined as the difference between the SNN output spike time and the first SNN input spike time. Obviously, ``ideal" means that we skip the hardware processing delay.
The distribution is shown in Fig. \ref{fig:effi_lidar}(a). An interesting observation was that the ``ideal inference time" was approximately $1.9$ ms, the same as what we obtained from the practical GPU running time in Table \ref{tbl:acc_kitti}. This may not be surprising because the SNN spiking time duration is usually much longer than the GPU execution duration for simple networks. Asynchronous SNN operation can surely reduce recognition/inference delay. Based on the above observation, we propose the event-ratio as an approximation of the time efficiency gain. The event-ratio is defined as the proportion of contributing events (input events consumed before the decision) for recognizing the object in an image frame to all the events corresponding to this image frame, i.e., \begin{equation} r_{\rm event} = \frac{N_{\rm contributing}}{N_{\rm all}}. \end{equation} The estimated time efficiency gain is then \begin{equation} \hat{G}_{\rm time} \approx 1 - r_{\rm event}. \label{eq4.1} \end{equation} The estimation is accurate when the computation delay of the CPU or GPU is negligible compared with the event accumulation time duration, which is often true. This way of calculating the time efficiency gain resolves a big hurdle for DVS datasets because DVS datasets usually do not have realistic event timing. Even for LiDAR datasets, event timing could only be constructed artificially. The time efficiency gain (\ref{eq4.1}) captures the key factor, i.e., asynchronous operation, and skips the variations caused by non-ideal software coding and platforms. Adopting this approximation, the accuracy and time efficiency of our models on all the datasets are listed in Table \ref{tbl:acc_delay}. The gain $G_{\rm acc}$ was calculated by choosing the accuracy of the best model used in our comparison as $A_{\rm ref}$, see Table \ref{tbl:comp_cifar10} and Table \ref{tbl:acc_sydney}. From the table, we can see that our system, in general, was competitive with the state-of-the-art models in recognition accuracy. More importantly, our system needed only a fraction of the events, which led to a $24\%$ to $75\%$ gain in time efficiency. \begin{table}[t] \caption{Summary of accuracy and time efficiency of our system.} \label{tbl:acc_delay} \begin{center} \scalebox{1}{ \begin{tabular}{c | c c c c} \hline Dataset & Accuracy & $G_{\rm acc}$ & Event & $\hat{G}_{\rm time}$ \\ & & & Ratio & \\ \hline KITTI & 96.62\% & 5.46\% & 0.76 & 24\% \\ N-Sydney & 78.00\% & 6.85\% & 0.62 & 38\% \\ \hline DVS-CIFAR10 & 69.87\% & 8.71\% & 0.38 & 62\% \\ DVS-Barrel & 99.52\% & 4.32\% & 0.25 & 75\% \\ N-MNIST & 99.19\% & -0.0\% & 0.37 & 63\% \\ N-Caltech101 & 91.89\% & -1.6\% & 0.45 & 55\% \\ Hand-Gesture & 99.99\% & 0.91\% & 0.41 & 59\% \\ \hline \end{tabular}} \end{center} \end{table}

\paragraph{\bf DVS-CIFAR10} We take DVS-CIFAR10 as an example to detail the evaluations on DVS datasets. The results of all other DVS datasets are given at the end of this section. Table \ref{tbl:comp_cifar10} shows that our model had $69.87\%$ recognition accuracy, higher than the competing models listed. Note that the competing models were selected carefully according to their importance for the development of this dataset and their state-of-the-art performance. Their accuracy values were cited directly from the papers. Their methods are also listed to clearly show the performance differences among conventional machine learning methods, DNN/CNN, and SNN/SCNN. The delay reduction was even more striking, as shown in Table \ref{tbl:acc_delay}.
From Fig. \ref{fig:effi_dvs}(b), we can see that our model used only $38\%$ of the events for inference on average, which means a delay reduction of about $62\%$. From Fig. \ref{fig:effi_dvs}(a), our model on average used $2.35$ ms for inference, based on artificially created event timing information similar to that of KITTI. \begin{table}[t] \centering \caption{Comparison with existing results on DVS-CIFAR10.} \label{tbl:comp_cifar10} \scalebox{0.9}{ \begin{tabular}{ccc} \hline Model & Method & Accuracy \\ \hline Zhao 2014 \cite{zhao2014feedforward} & SNN & 22.1\% \\ Lagorce 2016 \cite{lagorce2016hots} & HOTS & 27.1\% \\ Sironi 2018 \cite{sironi2018hats} & HATS & 52.4\% \\ Wu 2019 \cite{wu2019direct} & SNN & 60.5\% \\ Our model (small) & SCNN & 60.8\% \\ Our model (medium) & SCNN & 64.3\% \\ Our model (large) & SCNN & 69.9\% \\ \hline \end{tabular}} \end{table} The effect of different network configurations on the training process was investigated and is depicted in Fig. \ref{fig:loss_trend}, where the training loss was calculated with 8 images randomly sampled from the training set. When trained on the DVS-CIFAR10 dataset, the larger model converged faster than the smaller ones, which might be due to the better capability of larger SNNs in learning data representations. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{fig/loss_on_cifar_crop.pdf} \caption{Training convergence of the proposed models over the DVS-CIFAR10 dataset.} \label{fig:loss_trend} \end{figure} \paragraph{\bf Other datasets} For the rest of the datasets, N-Sydney, DVS-barrel, N-MNIST, N-Caltech101, and Hand-Gesture, the accuracy comparisons with the state-of-the-art models are listed in Table \ref{tbl:acc_sydney}. The table also lists the major network architectures. We can see that our models provided competitive recognition accuracy. For challenging datasets with relatively lower recognition accuracy, such as N-Sydney, our model had much higher performance. For the popular N-MNIST dataset, our model had relatively low computational complexity because its number of trainable weights (2.2e4) was lower than that of the other models. The event-ratio/inference-time distributions of our SNN model on LiDAR and DVS datasets are shown in Fig. \ref{fig:effi_lidar} and Fig. \ref{fig:effi_dvs}, respectively. On average, $62\%$ of the events were needed for the N-Sydney dataset, while the DVS datasets mostly took less than $50\%$ of the events to accomplish the inference. It can also be seen that the inference time for most samples was in the range of $1$ to $3$ ms, demonstrating strong time efficiency.
\begin{table*}[t] \caption{Performance comparison over the N-Sydney, DVS-Barrel, N-MNIST, N-Caltech101 and Hand-Gesture datasets.} \label{tbl:acc_sydney} \begin{center} \scalebox{1}{ \begin{tabular}{c | c c c} \hline Dataset & Model & Accuracy & Method \\ \hline \hline N-& Chen'14 \cite{chen2014performance} & 71\% & GFH+SVM \\ Sydney & Maturana'15 \cite{maturana2015voxnet} & 73\% & CNN \\ & Our Model & 78\% & SCNN \\ \hline \hline & Perez'13 \cite{perez2013mapping} & 95.2\% & CNN \\ DVS- & Perez'13 \cite{perez2013mapping} & 91.6\% & SCNN \\ Barrel & Orchard'15 \cite{orchard2015hfirst} & 84.9\% & HFirst \\ & Our Model & 99.5\% & SCNN \\ \hline \hline & Orchard'15 \cite{orchard2015hfirst} & 71.20\% & HFirst, 1.2e5 \\ & Neil'16 \cite{neil2016phased} & 97.30\% & LSTM, 4.5e4 \\ N- & Lee'16 \cite{lee2016training} & 98.70\% & SNN, 1.8e6 \\ MNIST & Shrestha'18 \cite{shrestha2018slayer} & 99.20\% & SNN, 7.2e4 \\ & Wu'18 \cite{wu2018spatio} & 98.78\% & SNN, 1.8e6 \\ & Our Model & 99.15\% & SCNN, 2.2e4 \\ \hline \hline & Zhang'06 \cite{zhang2006svm} & 66.23\% & SVM-KNN \\ & Donahue'14 \cite{donahue2014decaf} & 86.91\% & DNN \\ N- & Chatfield'14 \cite{chatfield2014return} & 88.54\% & CNN \\ Caltech & He'15 \cite{he2015spatial} & 93.42\% & CNN \\ 101 & Orchard'15 \cite{orchard2015hfirst} & 5.4\% & HFirst, SNN \\ & Sironi'18 \cite{sironi2018hats} & 64.2\% & HATS, SNN \\ & Our Model & 91.9\% & SCNN \\ \hline \hline & Huang'11 \cite{huang2011gabor} & 94\% & GF \\ Hand- & Mantecon'16 \cite{mantecon2016hand} & 99\% & SVM \\ Gesture & Our Model & 99.9\% & SCNN \\ \hline \end{tabular}} \end{center} \end{table*}

\section{Conclusion and Future Work} \label{sec:conclusion}
In this paper, we proposed a spiking learning system utilizing temporally coded spiking neural networks for efficient object recognition from event-based sensors such as LiDAR and DVS. A novel temporal coding scheme was developed, which permits the system to exploit the asynchronously arriving sensor events without delay. Integrating nicely with the asynchronous processing nature of SNNs, the system can achieve superior timing efficiency. The performance of the system was evaluated on a list of $7$ LiDAR and DVS datasets. The experiments showed that the proposed method achieved remarkable accuracy on real-world data and significantly reduced recognition delay. This paper demonstrates the potential of SNNs in challenging applications involving high speed and dynamics.

On the other hand, although we developed the general temporal encoding scheme in Section~\ref{sec:snn_model}, the hyper-parameters of the encoding rules (\ref{eq3.25}) and (\ref{eq3.26}) were not optimized in our experiments. Rather, we used some hand-picked parameter values heuristically. Thanks to the cost function (\ref{eq3.50}), the training of the SNN led to the minimum spike timing $t_{L,i}$ in the final output layer, which means that we still obtained a self-optimized event accumulation time $t$. It will be an interesting future research topic to optimize these hyper-parameters directly, by either including them in the cost function (\ref{eq3.50}) or using genetic algorithms. This will optimize the event accumulation time $t$ to further enhance recognition accuracy and reduce recognition delay. In addition, we evaluated our learning system for single-object recognition or classification in single images only. It remains to extend this system to object detection and/or continuous video processing, which will make the study of time efficiency more interesting.
Considering the small size of the event-driven datasets used in this paper, only relatively simple SNNs were applied because there would otherwise be over-fitting. This might be one of the reasons for our SNN model's surprisingly better performance than VGG-16 shown in Table \ref{tbl:acc_kitti}. It will be interesting future work to adapt our system to large event-based datasets. \newpage \bibliographystyle{named}
Novel methods that can recognize objects asynchronously with the accumulation duration optimized according to the nature of object recognition tasks are more desirable. As another example, a typical $5$ Hz LiDAR sensor needs 0.2 seconds to collect all the events for a frame image. During this waiting period, a car travels at $120$ km/hour can run near $7$ meters. For timely object recognition or accident warning, methods that can process events asynchronously without waiting for the accumulation of the final events are more desirable. In this paper, we propose a new spike learning system that uses the spiking neural network (SNN) with a novel temporal coding to deal with specifically the task of asynchronous event-driven object recognition. It can reduce recognition delay and realize much better time efficiency. It can maintain competitive recognition accuracy as existing approaches. Major contributions of this paper are: \begin{itemize} \item We design a new spike learning system that can exploit both the asynchronous arrival time of events and the asynchronous processing capability of neuron networks to reduce delay and optimize timing efficiency. The first ``asynchronous" means that events are processed immediately with a first-come-first-serve mode. The second ``asynchronous" means that the network can output recognition results without waiting for all neurons to finish their work. Integrating them together can significantly enhance time efficiency, computational efficiency, and energy efficiency. \item For the first time, recognition time efficiency is defined and evaluated extensively over a list of event-based datasets as one of the major objectives of object recognition. \item We develop a novel temporal coding scheme that converts each event's asynchronous arrival time and data to SNN spike time. It makes it possible for the learning system to process events immediately without delay and to use the minimum number of events for timely recognition automatically. \item We conduct extensive experiments over a list of $7$ event-based datasets such as KITTI \cite{Geiger2012CVPR} and DVS-CIFAR10 \cite{li2017cifar10}. Experiment results demonstrate that our system had a remarkable time efficiency with competitive recognition accuracy. Over the KITTI dataset, our system reduced recognition delay by $56.3\%$ to $91.7\%$ in various experiment settings. \end{itemize} The rest of the paper is organized as follows. Section \ref{sec:related work} introduces the related work. Section \ref{sec:methods} provides the details of the proposed spiking learning system. Experiment datasets and results are given in Sections \ref{sec:datasets} and \ref{sec:evaluation}, respectively. We conclude the paper in Section \ref{sec:conclusion}. \section{Related Work} \label{sec:related work} LiDAR uses active sensors that emit their own laser pulses for illumination and detects the reflected energy from the objects. Each reflected laser pulse is recorded as an event. From the events, object detection and recognition can be carried out by various methods, either traditional feature extraction methods or deep learning methods. Behley et al. \cite{behley2013laser} proposed a hierarchical segmentation of the laser range data approach to realize object detection. Wang and Posner \cite{wang2015voting} applied a voting scheme to process LiDAR range data and reflectance values to enable 3D object detection. Gonzales et al. \cite{gonzalez2017board} explored the fusion of RGB and LiDAR-based depth maps. 
Tatoglu and Pochiraju \cite{tatoglu2012point} presented techniques to model the intensity of the laser reflection during LiDAR scanning to determine the diffusion and specular reflection properties of the scanned surface. Hernandez et al. \cite{hernandez2014lane} took advantage of the reflection of the laser beam to identify lane markings on the road surface. Asvadi et al. \cite{asvadi2017depthcn} introduced a convolutional neural network (CNN) to process 3D LiDAR point clouds and predict 2D bounding boxes at the proposal phase. An end-to-end fully convolutional network was used for a 2D point map projected from 3D-LiDAR data in \cite{li2016vehicle}. Kim and Ghosh \cite{kim2016robust} proposed a framework utilizing fast R-CNN to improve the detection of regions of interest and the subsequent identification of LiDAR data. Chen et al. \cite{chen2017multi} presented a top view projection of the LiDAR point clouds data and performed 3D object detection using a CNN-based fusion network. DVS, also called neuromorphic vision sensor or silicon retina, records the changing of pixel intensity at fine time resolution as events. DVS-based object recognition is still at an early stage. Lagorce et al. \cite{lagorce2016hots} utilized the spatial-temporal information from DVS to build features and proposed a hierarchical architecture for recognition. Liu et al. \cite{liu2016combined} combined gray-scale Active Pixel Sensor (APS) images and event frames for object detection. Chen \cite{chen2018pseudo} used APS images on a recurrent rolling CNN to produce pseudo-labels and then used them as targets for DVS data to do supervised learning with the tiny YOLO architecture. Built on the neuromorphic principle, SNN is considered a natural fit for neuromorphic vision sensors and asynchronous event-based sensors. SNN imitates biological neural networks by directly processing spike pulses information with biologically plausible neuronal models \cite{maass1997networks}\cite{ponulak2011introduction}. Regular neural networks process information in a fully synchronized manner, which means every neuron in the network needs to be evaluated. Some SNNs, on the contrary, can work in asynchronous mode, where not all neurons are to be stimulated \cite{susi2018fns}. The attempts of applying SNN for neuromorphic applications include pattern generation and control in neuro-prosthetics systems \cite{ponulak2006resume}, obstacle recognition and avoidance\cite{ge2017spiking}, spatio- and spectro-temporal brain data mapping \cite{kasabov2014neucube}, etc. Attempts were also made to use SNN for object detection and recognition, either over traditional frame-based image data \cite{cannici2019asynchronous,zhang2019tdsnn,lee2016training}, or over event-based LiDAR and DVS data \cite{zhou2020deepscnn}\cite{wu2019direct}. A class of SNNs was developed with temporal coding, where spiking time instead of spiking rate or spiking count or spiking probability was used to encoding neuron information. The SpikeProp algorithm \cite{bohte2002error} described the cost function in terms of the difference between desired and actual spike times. It is limited to learning a single spike. Supervised Hebbian learning \cite{legenstein2005can} and ReSuMe \cite{ponulak2010supervised} were primarily suitable for the training of single-layer networks only. As far as we know, all the existing SNN works over event-based sensor data need a pre-set time duration to accumulate events into frame-based images before recognition. 
How to break this limitation and develop SNNs with fully asynchronous operation is still an open problem. \section{A Spike Learning System} \label{sec:methods} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{fig/systemstages.PNG} \caption{Flow diagram of the proposed spike learning system.} \label{fig:flowdiagram} \end{figure} Fig.~\ref{fig:flowdiagram} shows the workflow of our proposed spike learning system. The pipeline consists of three major blocks: 1) Pre-processing of asynchronous events from event-based sensors; 2) Temporal coding of the pre-processed events into SNN input spike times; 3) Object recognition with the SNN. \subsection{Pre-processing of Events} The event data in standard LiDAR datasets are usually given as tuples $(x, y, z, r)$, where $(x, y, z)$ is the location of the object, and $r$ is the received light intensity. The events form a point cloud at a certain time-stamp. Existing LiDAR datasets usually contain only this time-stamp instead of per-event timing. For real-time applications, events may be collected asynchronously. Each event comes with its own arrival time $t_a$, which is the sum of the laser pulse receiving time, the LiDAR signal processing time, and the data transmission time from the sensor to the learning system. With voxelization or other similar techniques \cite{zhou2020deepscnn}, we can compress the event data by quantizing the large spatial region into a small, fixed 3D integer grid $(x_v, y_v, z_v)$. For example, many papers quantize the KITTI dataset point cloud into a $768\times 1024 \times 21$ grid. Let the spatial quantization step sizes in the three dimensions be $\Delta_x$, $\Delta_y$, and $\Delta_z$, respectively. Then the event $(x, y, z, r)$ falls into the voxel \begin{eqnarray} & {\cal V}(x_v, y_v, z_v) = \{(x, y, z): x_v \Delta_x \leq x < (x_v+1)\Delta_x, \nonumber \\ & y_v \Delta_y \leq y < (y_v+1)\Delta_y, z_v \Delta_z \leq z < (z_v+1)\Delta_z\}. \end{eqnarray} A voxel may have multiple or zero events due to event sparsity. Its value can be set as the number of events falling into it, the light intensity $r$, the object distance $\sqrt{x^2+y^2+z^2}$, or the laser light flying time $2\sqrt{x^2+y^2+z^2}/c$ with light speed $c$ \cite{zhou2020deepscnn}. In our experiments, we set the voxel value as \begin{equation} D(x_v, y_v, z_v) = \left\{ \begin{array}{ll} \frac{2\sqrt{x^2+y^2+z^2}}{c}, & (x, y, z) \in {\cal V}(x_v, y_v, z_v) \\ 0, & {\rm otherwise} \end{array} \right. \end{equation} We use the first arriving event $(x, y, z, r)$ inside this voxel to calculate $D(x_v, y_v, z_v)$. If no event falls inside the voxel, then $D(x_v, y_v, z_v)=0$. For DVS cameras, each event is recorded as $(x, y, t, p)$, where $(x, y)$ is the pixel coordinate in 2D space, $t$ is the time-stamp or arrival time of the event, and $p$ is the polarity indicating the brightness change over the previous time-stamp. The polarity is usually set as $p(x, y, t) = \pm 1$ or $p(x, y, t) = \{0, 1\}$ \cite{chen2019multi}. Pixels without significant intensity change will not output events. DVS sensors are noisy because of coarse quantization, inherent photon shot noise, transistor circuit noise, arrival timing jitter, etc. By accumulating the event stream over an exposure time duration, we can obtain an image frame. 
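To make the LiDAR pre-processing above concrete, the following is a minimal Python/numpy sketch of the voxelization step. It is only an illustration under simplifying assumptions: the voxel grid is assumed to start at the origin of the sensor frame, the events are assumed to be sorted by arrival time $t_a$ so that the first row mapping to a voxel is its first-arriving event, and all names are illustrative rather than part of any released code. The DVS accumulation is treated next.
\begin{verbatim}
import numpy as np

C = 3.0e8  # speed of light in m/s

def voxelize_lidar_events(events, deltas, grid_shape):
    # events: (N, 5) array of rows (x, y, z, r, t_a), sorted by t_a.
    # deltas: (dx, dy, dz) spatial quantization step sizes.
    # grid_shape: (X, Y, Z) size of the fixed integer voxel grid.
    # Returns D with D[xv, yv, zv] equal to the laser flight time of the
    # first event falling into that voxel, and 0 for empty voxels.
    D = np.zeros(grid_shape, dtype=np.float32)
    filled = np.zeros(grid_shape, dtype=bool)
    for x, y, z, r, t_a in events:
        idx = (int(np.floor(x / deltas[0])),
               int(np.floor(y / deltas[1])),
               int(np.floor(z / deltas[2])))
        if all(0 <= i < s for i, s in zip(idx, grid_shape)) and not filled[idx]:
            D[idx] = 2.0 * np.sqrt(x * x + y * y + z * z) / C
            filled[idx] = True
    return D
\end{verbatim}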
For a DVS pixel, accumulating events over the exposure time from $t_0$ to $t_K$ gives the image \begin{equation} D(x_v, y_v) = \sum_{t=t_0}^{t_K} p(x_v, y_v, t) + I(x_v, y_v), \label{eq3.5} \end{equation} where $(x_v, y_v)$ is the pixel location, and $I(x_v, y_v)$ is the initial image at time $t_0$. We can set $I(x_v, y_v)=0$ from the start. Obviously, a longer exposure duration $t_K-t_0$ leads to better image quality for slow-moving objects but blurring for fast-moving objects. Most existing methods pre-set an exposure duration, such as $100$ milliseconds for DVS-CIFAR10, to construct the image $D(x_v, y_v)$ for the subsequent recognition. In contrast, our proposed system does not have such a hard exposure time limitation and can automatically give recognition outputs within the best exposure time duration for the task. \subsection{Temporal Coding for Spiking Neural Networks} \label{sec:snn_model} In SNNs, neurons communicate with spikes or action potentials through the layers of the network. When a neuron's membrane potential reaches its firing threshold, the neuron emits a spike and transmits it to other connected neurons \cite{ponulak2011introduction}. We adopt the spike-time-based spiking neuron model of \cite{mostafa2018supervised}. Specifically, we use the non-leaky integrate-and-fire (n-LIF) neuron with exponentially decaying synaptic current kernels. The membrane potential is described by \begin{equation} \frac{dv_{j}(t)}{dt} = \sum_{i} w_{ji} \kappa(t-t_{i}), \label{eq3.10} \end{equation} where $v_{j}(t)$ is the membrane potential of neuron $j$, $w_{ji}$ is the weight of the synaptic connection from the input neuron $i$ to the neuron $j$, $t_i$ is the spiking time of neuron $i$, and $\kappa(t)$ is the synaptic current kernel function. The value of neuron $i$ is encoded in the spike time $t_i$. The synaptic current kernel function determines how the spike stimulation decays over time. We use the exponentially decaying kernel \begin{equation} \kappa(t)=u(t)e^{-\frac{t}{\tau}}, \end{equation} where $\tau$ is the decaying time constant, and $u(t)$ is the unit step function defined as \begin{equation} u(t) = \left\{ \begin{array}{ll} 1, \;\;\; & \text{if $t\geq0$}\\ 0, \;\;\; & \text{otherwise} \end{array} \right. . \end{equation} Fig. \ref{fig:vmem} illustrates how this neuron model works. A neuron is only allowed to spike once unless the network is reset or a new input pattern is presented. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{fig/spiking_mech.pdf} \caption{The working principle of the n-LIF neuron model. (a) Four input neurons spike at time $t_i$, $i=1, \cdots, 4$. (b) Synaptic current $\kappa (t-t_i)$ jumps and decays over time. (c) Membrane voltage potential $v_j(t)$ rises towards the firing threshold. (d) The neuron $j$ emits a spike at time $t_j=t_{out}$ when the threshold is crossed.} \label{fig:vmem} \end{figure} An analog circuit implementing this neuron was designed in \cite{zhou2020deepscnn} and was shown to be highly energy efficient. For training or digital (software) implementations, however, we do not need to emulate the dynamics (\ref{eq3.10}). Instead, we skip the time evolution and consider only the membrane voltage at the spiking time $t_{j}$. For this purpose, solving (\ref{eq3.10}), we get \begin{equation} v_j(t_j) = \sum_{i\in C} w_{ji} {\tau} \left( 1- e^{-\frac{t_{j}-t_i}{\tau}} \right), \end{equation} where the set $C=\{i: t_i < t_{j}\}$ includes all (and only those) input neurons that spike before $t_{j}$. 
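As a side illustration of the spiking mechanism in Fig.~\ref{fig:vmem}, the short sketch below numerically integrates (\ref{eq3.10}) with the exponential kernel and reports the first threshold-crossing time. It is purely didactic: the spike times, weights, and step size are made-up values, and our actual implementation relies on the closed-form spike-time relation rather than on simulating the dynamics.
\begin{verbatim}
import numpy as np

def nlif_first_spike(in_times, weights, tau=1.0, threshold=1.0,
                     dt=1e-3, t_max=10.0):
    # Forward-Euler integration of dv/dt = sum_i w_i * kappa(t - t_i)
    # with kappa(t) = u(t) * exp(-t / tau).  Returns the first time the
    # membrane potential v crosses the firing threshold, or None.
    t, v = 0.0, 0.0
    while t < t_max:
        drive = sum(w * np.exp(-(t - ti) / tau)
                    for w, ti in zip(weights, in_times) if t >= ti)
        v += drive * dt
        t += dt
        if v >= threshold:
            return t
    return None

# Four illustrative input spikes, as in Fig. 3.
print(nlif_first_spike([0.1, 0.3, 0.5, 0.9], [0.4, 0.4, 0.5, 0.3]))
\end{verbatim}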
The value of $v_j(t_j)$ depends on $\tau$. For any $\tau$, we can find an appropriate voltage threshold so that the active input neuron set $C$ and the output spike time $t_j$ do not change. Therefore, in a digital implementation, we can simply set both the voltage threshold and $\tau$ to 1. With $v_j(t_j)=1$, the neuron $j$'s spike time satisfies \begin{equation} e^{t_{j}} = \sum_{i\in C} e^{t_{i}} \frac{w_{ji}} {\sum_{\ell \in C} w_{j\ell}-1}. \label{eq3.20} \end{equation} In a software SNN implementation, we can directly use $e^{t_i}$ as the neuron value, calculate $w_{ji}/({\sum_{\ell \in C} w_{j\ell}-1})$ as the effective weights, and (\ref{eq3.20}) is then the input-output equation of a feed-forward fully connected neural network layer. We do not need extra nonlinear activation functions because the mapping (\ref{eq3.20}) is already nonlinear: both the set $C$ and the effective weights depend on the inputs. At the first (or input) layer, we need to encode the pre-processed event data $D(x_v, y_v, z_v)$ into spike times $t_i$. Existing methods such as \cite{zhou2020deepscnn} simply let $t_i = D(x_v, y_v, z_v)$, which, when applied to event-driven data, ignores the inherent temporal information and the asynchronous property of the events. To fully exploit this asynchronous property, we propose the following new temporal coding scheme that encodes both the event value $D(x_v, y_v, z_v)$ and the arrival time $t_a$ of each event. Consider a LiDAR event $(x, y, z, r)$ arriving at time $t_a$. During pre-processing, assume it is used to update the voxel value $D(x_v, y_v, z_v)$. Also assume that this voxel $(x_v, y_v, z_v)$ corresponds to the $i$th input neuron in the SNN input layer. This neuron's spiking time is then set as \begin{equation} t_i = \max\{\beta, t_a\} + \alpha D(x_v, y_v, z_v), \label{eq3.25} \end{equation} where $\beta$ is a time parameter used to adjust the delayed processing of early-arriving events, and $\alpha$ is a constant to balance the value of $D(x_v, y_v, z_v)$ and $t_a$ so that the two terms in $t_i$ are not orders of magnitude apart. The first term on the right of (\ref{eq3.25}) encodes the event arrival time. If $\beta = 0$, then there is no delay in encoding the arrival time: all the events are processed immediately upon arrival. If $\beta$ is set to the image frame time, we have the conventional synchronous event processing scheme, where object recognition does not start until all the events are accumulated together. We can use $\beta$ to control the exposure time $t_K$. The second term encodes the event value. If $\alpha$ is set to $0$, then only the event arrival time is encoded. In this case, $\beta$ should be set to a small value so that the temporal information can be fully exploited (e.g. if $\beta = 0$, then $t_i = t_a$). As a matter of fact, (\ref{eq3.25}) is a general temporal encoding framework. Various encoding strategies can be realized with appropriate parameters $\alpha$ and $\beta$, such as encoding the event value only, the event arrival time only, or both the event value and the arrival time. For DVS sensors, assume similarly that the pixel $(x_v, y_v)$ is the $i$th input neuron. During the exposure time, while events are being accumulated into $D(x_v, y_v)$ according to (\ref{eq3.5}), the spiking time is set as the smallest $t_i$ that satisfies \begin{equation} \sum_{t=t_0}^{t_i} p(x_v, y_v, t) + I(x_v, y_v) \geq \Gamma(t_i). 
\label{eq3.26} \end{equation} $\Gamma(t)$ is a threshold function that can be set as a constant $\alpha$ or a linear decreasing function \begin{equation} \Gamma(t) = \beta(t_K-t), \end{equation} with rate $\beta$. If $\beta=0$, then the neuron spikes immediately when the pixel value is positive. A sufficiently large $\beta$ effectively makes us wait and accumulate all the events to form a frame image before SNN processing. In this case, we fall back to the traditional synchronous operation mode. If a pixel's intensity accumulates faster, then it spikes earlier. If the pixel accumulates slower or even stays near 0, then it may never spike. With the proposed temporal coding scheme, the system would be able to output a recognition decision asynchronously after some accumulation time $t$ between $t_0$ and $t_K$, and the event accumulation operation stops at $t$. Only the events during $t_0$ and $t$ are used for inference. Therefore, $t_0$ and $t_K$ can be simply set as the start and end time of the recognition task, such as an image frame time. This avoids the headache of looking for the best pre-set accumulation time. Note that no pre-set accumulation time can be optimal for all images. Some images need longer accumulation, while some other images need short accumulation. The proposed temporal coding enables our system to resolve this challenge in a unique way: We just pre-set a very large accumulation time $t_K$, and the system can automatically find the optimal accumulation time $t$ used for each image. In other words, instead of the fixed pre-set accumulation time, a unique event accumulation time $t$ (well before $t_K$) can be found for each image automatically by the system. \subsection{Object recognition with SNN} Many other SNNs use more complex neuron models or use spike counts or rates as neuron values. In comparison, our neuron model is relatively simple and has only a single spike, which makes our SNN easier to train and more energy-efficient when implemented in hardware. Based on the neuron input/output expression (\ref{eq3.20}), the gradient calculation and gradient-descent-based training become nothing different from conventional DNNs. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig/snn.pdf} \caption{SNN with spiking fully-connected (FC) layers for object recognition.} \label{fig:implementation} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{fig/netarchitecture.PNG} \caption{SNN with multiple SCNN layers and FC layers for object recognition.} \label{fig:scnn} \end{figure} Based on (\ref{eq3.20}), we can implement both spiking fully-connected (FC) layers and spiking convolutional neural network (SCNN) layers just as conventional DNN FC and CNN layers. Fig. \ref{fig:implementation} shows an SNN with two FC layers: one input layer with our temporal coding, one hidden FC layer, and one output FC layer. SCNN layers work in a similar way as traditional CNN but are equipped with spiking kernels. For more complex tasks, we can apply multiple SCNN layers and FC layers, as shown in Fig. \ref{fig:scnn}. Pooling layers such as max-pooling and average-pooling can also be used, which is the same as conventional DNNs. The standard back-propagation technique can be used to train the weights $w_{ji}$. For real-time object recognition, the SNN spiking time range is on the same scale as the event arrival time, specifically, starting at $t_0$ and ending at $t_K$. The SNN takes in spikes sequentially according to their arrival time. 
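For readers who prefer code, the following is a minimal numpy sketch of the input-layer temporal coding (\ref{eq3.25}) and of the forward pass of one spiking FC layer implied by (\ref{eq3.20}). It is not our TensorFlow implementation; the causal set $C$ is found by scanning the inputs in the order of their spike times, and a candidate output time is accepted only if it does not precede the input spikes it depends on.
\begin{verbatim}
import numpy as np

def encode_lidar_event(D_value, t_arrival, alpha=1.0, beta=0.0):
    # Input-layer temporal coding: t_i = max(beta, t_a) + alpha * D.
    return max(beta, t_arrival) + alpha * D_value

def spiking_fc_forward(t_in, W):
    # t_in: (N,) numpy array of input spike times.  W: (M, N) weights.
    # Returns (M,) output spike times; np.inf marks neurons that never fire.
    z_in = np.exp(t_in)
    order = np.argsort(t_in)          # consider causal sets in arrival order
    t_out = np.full(W.shape[0], np.inf)
    for j in range(W.shape[0]):
        w_sum, z_sum = 0.0, 0.0
        for k, i in enumerate(order):
            w_sum += W[j, i]
            z_sum += W[j, i] * z_in[i]
            if w_sum <= 1.0:          # denominator must stay positive
                continue
            z_j = z_sum / (w_sum - 1.0)      # candidate e^{t_j}
            if z_j < z_in[i]:                # spike cannot precede its causes
                continue
            nxt = t_in[order[k + 1]] if k + 1 < len(order) else np.inf
            t_j = np.log(z_j)
            if t_j <= nxt:                   # causal set C is consistent
                t_out[j] = t_j
                break
    return t_out
\end{verbatim}
Neurons whose accumulated input weights never suffice to reach the threshold simply do not fire, which the sketch reports as an infinite spike time.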
During inference, each neuron keeps accumulating the weighted input values and comparing the sum with its threshold until the accumulated spikes fire the neuron. Once the neuron spikes, it does not process any further input spikes unless it is reset or presented with a new input pattern. The recognition decision is made at the time of the first spike among the output neurons. A smaller $t_j$ or $e^{t_j}$ means a stronger classification output. A smaller output $t_j$ also means a smaller inference delay, which is an outcome of the asynchronous working principle of the SNN. Define the input of the SNN as ${\bf z}_0$ with elements $z_{0,i} = e^{t_i}$, and the output of the SNN as ${\bf z}_{L}$ with elements $z_{L, i}=e^{t_{L, i}}$. Then we have ${\bf z}_{L} = f({\bf z}_0; {\bf w})$ with a nonlinear mapping $f$ and trainable weights ${\bf w}$, which include all SNN weights $w_{ji}$ and the temporal coding parameter $\beta$. Let the target output be class $c$; then we train the network with the loss function \begin{equation} {\cal L} ({\bf z}_L, c) = -\ln \frac{z_{L, c}^{-1}}{\sum_{i\neq c} z_{L, i}^{-1}} + k \sum_j \max\left\{0, 1-\sum_i w_{ji}\right\}, \label{eq3.50} \end{equation} where the first term is to make $z_{L, c}$ (equivalently $t_{L, c}$) the smallest one, while the second term makes sure that the sum of the input weights of each neuron is larger than $1$. The parameter $k$ adjusts the weighting between these two terms. The training of (\ref{eq3.50}) can be conducted with the standard backpropagation algorithm similar to conventional DNNs, just as in \cite{mostafa2018supervised}. Nevertheless, a problem of \cite{mostafa2018supervised} is that its algorithm did not require $t_j> t_i$ for $i \in {\cal C}$, which led to $t_j \leq t_i$ or even negative $t_j$ and was not practical. We corrected this problem and implemented the training of (\ref{eq3.50}) in the standard deep learning platform TensorFlow. \section{Evaluation Datasets} \label{sec:datasets} To investigate the effectiveness of our proposed system, we evaluated it on a list of $7$ LiDAR and DVS datasets introduced below. Their sample images are shown in Fig. \ref{fig:imagesamples}. \begin{figure}[t] \centering \frame{\includegraphics[width=0.9\linewidth]{fig/datasets_new.pdf}} \caption{Sample images of LiDAR and DVS datasets used in this paper.} \label{fig:imagesamples} \end{figure} \subsection{LiDAR Datasets} \paragraph{\bf KITTI Dataset} In order to evaluate the ability of our proposed system on complex real-life data in the autonomous driving scenario, we trained and tested it on the KITTI dataset \cite{Geiger2012CVPR}. We utilized the KITTI 3D object detection benchmark, specifically the point cloud data collected by a Velodyne HDL-64E rotating 3D laser scanner, which provided 7481 labeled samples. However, the provided point cloud data cannot be used directly because all label annotations (location, dimensions, observation angle, etc.) are given in camera coordinates instead of Velodyne coordinates. To make use of these annotations, we first mapped the point cloud data $(x, y, z)$, which are given in Velodyne coordinates, to an expanded front view $(x_{\rm front}, y_{\rm front})$, whose size was determined by the resolution of the LiDAR sensor. 
We used the transformation \begin{equation} x_{\rm front} = \left \lfloor -\frac{\arctan\frac{y}{x}} {R_{h}} \right \rfloor, y_{\rm front} = \left \lfloor -\frac{\arctan\frac{z}{\sqrt{x^2+y^2}}} {R_{v}} \right \rfloor \end{equation} where $R_{h}$ and $R_{v}$ are the horizontal and vertical angular resolution in radians, respectively. In order to project the label annotations onto the front view plane, we first calculated the bounding box in camera coordinates and transferred the corners to Velodyne coordinates by multiplying the transition matrix $T_{c2v}$. The object location was mapped onto the front view similarly, as illustrated in Fig. \ref{fig:kitti_process}. Based on the front view locations, objects were cropped with a fixed size to establish the recognition dataset. \begin{figure}[tbp!] \centering \includegraphics[width=0.6\linewidth]{fig/kitti_process.pdf} \caption{Transformation of KITTI 3D Point Clouds into 2D LiDAR front view images.} \label{fig:kitti_process} \end{figure} The changing from 3D to 2D would reduce the computational complexity of recognition. We also artificially generated arrival time $t_a$ for each event $(x, y, z)$ linearly with respect to $x$. The processed KITTI dataset contains 32456 training samples and 8000 testing samples covering 8 classes of KITTI objects. \paragraph{\bf N-Sydney} The N-Sydney Urban Objects dataset \cite{chen2014performance} is an event-based LiDAR dataset containing 26 object classes. We considered only the following 9 classes: Van, Tree, Building, Car, Truck, 4wd, Bus, Traffic light, and Pillar. We artificially generated arrival time $t_a$ for each event. \subsection{DVS Datasets} \paragraph{\bf DVS-CIFAR10} The DVS-CIFAR10 dataset \cite{li2017cifar10} is converted from the popular CIFAR10 data set. It has 10000 samples covering 10 classes. We split the dataset into a training set of $8000$ samples and a testing set of $2000$ samples while adopting the full $128 \times 128$ pixel scene. \paragraph{\bf DVS-barrel} The DVS-barrel dataset has 6753 samples with 36 classes \cite{orchard2015hfirst}, which we split into a training set of 3453 samples and a test set of 3000. We used the ``ExtractedStabilized'' version of the dataset, which, rather than using the full $128 \times 128$ pixel scene, extracts individual characters into $32 \times 32$ pixel scenes. \paragraph{\bf N-MNIST, N-Caltech101, Hand-Gesture} The N-MNIST \cite{orchard2015hfirst} and N-Caltech101 \cite{zhang2006svm} datasets are the conversion of two popular image datasets MNIST and Caltech101. The N-MNIST has $10$ object classes and image size $28 \times 28$. Caltech101 has 100 object classes plus a background class. The image size on average is $200 \times 300$. The Hand-Gesture dataset \cite{huang2011gabor} is a DVS dataset with an image size of $120\times 320$. \section{Evaluation} \label{sec:evaluation} \subsection{Experiment Setup} Table \ref{tbl:snn_config} lists the network configurations we designed based on the proposed spiking learning system architecture for the datasets listed in Fig. \ref{fig:imagesamples}. \begin{table}[t] \centering \caption{Our network models. 
Sample notation explained: F256 (fully connected layer with 256 spiking neurons), C5-48 (convolutional layer with 48 spiking kernels of size $5 \times 5$), AP (average-pooling layer with stride 2).} \label{tbl:snn_config} \scalebox{0.9}{ \begin{tabular}{r|l} \hline KITTI & (50$\times$118$\times$1): C5-48, C5-24, F256, F8 \\ N-Sydney & (32$\times$32$\times$32): C5-32, C3-32, F128, F9 \\ \hline \hline DVS-barrel & (input 1024): F2000, F36 \\ N-MNIST & (28$\times$28$\times$1): C5-32, C5-16, F10 \\ N-Caltech101 & (200$\times$300): C5-16, C3-8, F64, F101 \\ HandGesture & (120$\times$320): C5-32,C3-48,C3-16,F64,F10 \\ \hline & Small: C3-32, AP, C3-48, AP, F256, F10 \\ DVS- & Medium: C3-32, C3-48, AP, C3-64, AP, \\ CIFAR10 & \hspace{1.5cm} F256, F10 \\ (128$\times$128) & Large: C3-32, C3-64, AP, C3-128, \\ & \hspace{1cm} C3-256, AP, F1024, F10 \\ \hline \end{tabular}} \end{table} All datasets were tested over models with multiple SCNN and FC layers because our experiments showed that they were much better than simpler SNNs with FC layers only. As for the KITTI dataset, a model with two SCNN layers and two FC layers was employed. The input size was $50\times118\times1$. The kernel size for the SCNN layers was $5\times5$, with a stride size of 2. The numbers of kernels were 48 and 24, respectively. The output from the second SCNN layer had a size $13\times30\times24$ and was flattened and passed to the first FC layer (with 256 spiking neurons). The second FC layer had 8 output channels. The batch size was set to 10, and the initial learning rate was 1e-3 with decay. Adam optimizer was adopted for training. The N-Sydney Urban Object dataset and the N-Caltech 101 dataset were tested over similar models with two SCNN layers and two FC layers. The Hand-Gesture dataset was tested over a model with three SCNN layers and two FC layers. The DVS-CIFAR10 dataset was considered the most challenging one among these datasets. It is also much more challenging than the conventional frame-based CIFAR10 due to noisy samples and a single intensity channel. We created three different spiking network structures for a fair comparison with \cite{wu2019direct}, which also created three SNN structures to compare with other SNN and DNN results and published the best accuracy so far. The training employed Adam as the optimizer with a batch size of 8, and 100 training epochs with an exponentially decaying learning rate. By manipulating learning rates, the fastest convergence was obtained when the learning rate started at 1e-2 in epoch 1 and ended at 1e-5 in epoch 100. Note that we tried various optimizers such as SGD and Adam on each model during training. Their training performance did not show a significant difference. \subsection{Experiment Results} We used recognition accuracy ($A$) and recognition (inference) delay ($D$) as performance metrics to evaluate and compare our models with the state of the arts. Based on $A$ and $D$ we calculated performance gain ($G$) of our model as \begin{equation} G_{\rm acc} = \frac{A_{\rm ours}-A_{\rm ref}}{A_{\rm ref}}, \;\;\; G_{\rm time} = \frac{D_{\rm ref} - D_{\rm ours}}{D_{\rm ref}}, \end{equation} where $G_{\rm acc}$ and $G_{\rm time}$ are the accuracy gain and time efficiency gain of our model over some reference model, respectively. The time efficiency gain is also the ratio of delay/latency reduction. The delay includes both the delay of the inference algorithm, which we call ``inference delay", and the delay caused by waiting for the asynchronous arrival of events. 
Their sum is the ``total delay''. Although we do not directly evaluate computational complexity and energy efficiency, we point out that SNN-based models in general have lower computational complexity and higher energy efficiency, as discussed in \cite{zhou2020deepscnn} and many other SNN publications. \paragraph{\bf KITTI} We take the transformed KITTI dataset as a representative of the LiDAR datasets to interpret the evaluation results in detail. The results of the other datasets are provided at the end of this section. For the KITTI dataset, we compared our proposed system against the conventional CNN model VGG-16. To make the processed KITTI dataset work on VGG-16, we replicated the single intensity channel into three RGB color channels and resized the images from $50\times118$ to $128 \times 128$. We utilized a VGG-16 pre-trained on ImageNet for transfer learning. Table \ref{tbl:acc_kitti} shows that our system not only achieved better accuracy (with a gain of $G_{\rm acc}=5.46\%$ over VGG-16), but also had much smaller latency, i.e., higher time efficiency (with a gain of $G_{\rm time}$ between $56.3\%$ and $91.7\%$ over VGG-16). The reason for VGG-16's relatively lower testing accuracy might be that the processed KITTI dataset had a single intensity channel and a smaller image size than the ideal VGG-16 inputs. A smaller network can quite often achieve better accuracy than a complex network over a relatively small dataset. \begin{table}[t] \caption{Comparison of our SNN model with the VGG-16 model over the KITTI dataset for accuracy and timing efficiency.} \label{tbl:acc_kitti} \begin{center} \scalebox{0.9}{ \begin{tabular}{cc c c} \hline Metric & VGG-16 & Our Model & Gain \\ \hline Accuracy & 91.6\% & 96.6\% & 5.46\% \\ \hline Inf. Delay (CPU) & 38 ms & 8.5 ms & 77.6\% \\ Total Delay (CPU) & 63 ms & 27 ms & 57.1\% \\ \hline Inf. Delay (GPU) & 23 ms & 1.9 ms & 91.7\% \\ Total Delay (GPU) & 48 ms & 21 ms & 56.3\% \\ \hline \end{tabular}} \end{center} \end{table} Next, let us focus on the delay and time efficiency comparison. To obtain the inference delay, we set the temporal coding parameter $\beta$ so that our system worked in the synchronous mode, which gave us the SNN's average running time for an image. We did this on both an Intel Core i9 CPU and an NVIDIA GeForce RTX 2080Ti GPU. On the GPU, our model spent $1.9$ ms (milliseconds) while VGG-16 needed $23$ ms. Our model ran faster because it was much simpler and ran asynchronously (inference may terminate at a very small spiking time $t_{out}$ at the last layer). To obtain the total delay, we set the temporal coding parameter for the asynchronous mode to exploit both the asynchronous event arrival property and the SNN's asynchronous processing property. By contrast, VGG-16 had to wait until all the events were received before starting processing. Since the KITTI LiDAR generated $360^\circ$ image frames at the speed of $10$ Hz, the collection of all events for a $90^\circ$ field-of-view image had a delay of $25$ ms. So the total delay of VGG-16 on the GPU was $25+23=48$ ms. In contrast, our model had a total delay of $21$ ms on average, a delay reduction (and time efficiency gain) of $56.3\%$. We can also see that the asynchronous event arrival time dominated the total delay. Our model needed just a fraction of the events for inference, so it had a much smaller delay. 
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig/effi_lidar_new.png} \caption{Distributions of (a) inference time and (b) ratio of events used in inference, on LiDAR datasets.} \label{fig:effi_lidar} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig/effi_dvs_new.png} \caption{Distributions of (a) inference time and (b) ratio of events used in inference, on DVS datasets.} \label{fig:effi_dvs} \end{figure} The distribution of the ratio of events used by our model on the KITTI dataset is shown in Fig. \ref{fig:effi_lidar}(b). On average, $76\%$ of the events were used in inference. These figures also demonstrate that our system worked asynchronously and automatically selected different numbers of events in different recognition tasks. In addition, we calculated the ``ideal inference delay'', which is defined as the difference between the SNN output spike time and the first SNN input spike time. Obviously, ``ideal'' means that the hardware processing delay is skipped. The distribution is shown in Fig. \ref{fig:effi_lidar}(a). An interesting observation was that the ``ideal inference time'' was approximately $1.9$ ms, the same as what we obtained from the practical GPU running time in Table \ref{tbl:acc_kitti}. This may not be surprising because the SNN spiking time duration is usually much longer than the GPU execution duration for simple networks. Asynchronous SNN operation can surely reduce the recognition/inference delay. Based on the above observation, we propose the event-ratio as an approximation of the time efficiency gain. The event-ratio is defined as the proportion of contributing events (input events consumed before the decision) for recognizing the object in an image frame to all the events corresponding to this image frame, i.e., \begin{equation} r_{\rm event} = \frac{N_{\rm contributing}}{N_{\rm all}}. \end{equation} The estimated time efficiency gain is defined as \begin{equation} \hat{G}_{\rm time} \approx 1 - r_{\rm event}. \label{eq4.1} \end{equation} The estimation is accurate when the computation delay of the CPU or GPU is negligible compared with the event accumulation time duration, which is often true. This way of calculating the time efficiency gain resolves a big hurdle for DVS datasets because they usually do not have realistic event timing. Even for LiDAR datasets, event timing could only be constructed artificially. The time efficiency gain (\ref{eq4.1}) captures the key factor, i.e., asynchronous operation, and skips the variations caused by non-ideal software coding and platforms. Adopting this approximation, the accuracy and time efficiency of our models on all the datasets are listed in Table \ref{tbl:acc_delay}. The gain $G_{\rm acc}$ was calculated by choosing the accuracy of the best model used in our comparison as $A_{\rm ref}$; see Table \ref{tbl:comp_cifar10} and Table \ref{tbl:acc_sydney}. From the table, we can see that our system was in general competitive with the state-of-the-art models in recognition accuracy. More importantly, our system needed only a fraction of the events, which led to a $24\%$ to $75\%$ gain in time efficiency. 
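The event-ratio approximation is trivial to compute once the number of contributing events of each frame has been recorded; a small sketch with made-up counts, purely for illustration, is given below.
\begin{verbatim}
import numpy as np

def estimated_time_gain(n_contributing, n_all):
    # Event ratio r_event and the estimated gain 1 - r_event.
    r_event = n_contributing / n_all
    return 1.0 - r_event

# Hypothetical per-frame event counts (the real values come from the
# input spikes consumed before the first output spike).
contributing = np.array([4200.0, 3100.0, 5000.0])
all_events   = np.array([5500.0, 5200.0, 6100.0])
print(np.mean(estimated_time_gain(contributing, all_events)))
\end{verbatim}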
\begin{table}[t] \caption{Summary of accuracy and time efficiency of our system.} \label{tbl:acc_delay} \begin{center} \scalebox{1}{ \begin{tabular}{c | c c c c} \hline Dataset & Accuracy & $G_{\rm acc}$ & Event & $\hat{G}_{\rm time}$ \\ & & & Ratio & \\ \hline KITTI & 96.62\% & 5.46\% & 0.76 & 24\% \\ N-Sydney & 78.00\% & 6.85\% & 0.62 & 38\% \\ \hline DVS-CIFAR10 & 69.87\% & 8.71\% & 0.38 & 62\% \\ DVS-Barrel & 99.52\% & 4.32\% & 0.25 & 75\% \\ N-MNIST & 99.19\% & -0.0\% & 0.37 & 63\% \\ N-Caltech101 & 91.89\% & -1.6\% & 0.45 & 55\% \\ Hand-Gesture & 99.99\% & 0.91\% & 0.41 & 59\% \\ \hline \end{tabular}} \end{center} \end{table} \paragraph{\bf DVS-CIFAR10} We take DVS-CIFAR10 as an example to detail the evaluations on DVS datasets. The results of all other DVS datasets are given at the end of this section. Table \ref{tbl:comp_cifar10} shows that our model had 69.87\%\ recognition accuracy, higher than competitive models listed. Note that the competitive models were selected carefully according to their importance for the development of this dataset and their state-of-art performance. Their accuracy values were cited directly from the papers. Their methods were also listed to show clearly the performance difference of conventional machine learning methods, DNN/CNN, and SNN/SCNN. The reduction of the delay was even more striking based on the description of Table \ref{tbl:acc_delay}. From Fig. \ref{fig:effi_dvs}(b), we can see that our model used only $38\%$ of events for inference, which means a delay reduction of over $62\%$. From Fig. \ref{fig:effi_dvs}(a), our model on average used $2.35$ ms for inference based on our artificially created timing information of the events similar to KITTI. \begin{table}[t] \centering \caption{Comparison with existing results on DVS-CIFAR10} \label{tbl:comp_cifar10} \scalebox{0.9}{ \begin{tabular}{cccc} \hline Model & Method & Accuracy \\ \hline Zhao 2014 \cite{zhao2014feedforward} & SNN & 22.1\% \\ Lagorce 2016 \cite{lagorce2016hots} & HOTS & 27.1\% \\ Sironi 2018 \cite{sironi2018hats} & HAT & 52.4\% \\ Wu 2019 \cite{wu2019direct} & SNN & 60.5\% \\ Our model (sml) & SCNN & 60.8\% \\ Our model (mid) & SCNN & 64.3\% \\ Our model (large) & SCNN & 69.9\% \\ \hline \end{tabular}} \end{table} The effect of different network configurations on the training process was investigated and depicted in Fig. \ref{fig:loss_trend}, where the training loss was calculated with 8 images randomly sampled from the training set. When being trained on the DVS-CIFAR10 dataset, the larger model converged faster than the smaller one, which might be because of the better capability of larger SNN in learning data representations. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{fig/loss_on_cifar_crop.pdf} \caption{Training convergence of the proposed models over the DVS-CIFAR10 dataset.} \label{fig:loss_trend} \end{figure} \paragraph{\bf Other datasets} For the rest of the datasets, N-Sydney, DVS-barrel, N-MNIST, N-Caltech101, and Hand-Gesture, the accuracy comparisons with the state-of-the-art models are listed in Table \ref{tbl:acc_sydney}. The table also listed the major network architecture. We can see that our models provided competitive recognition accuracy. For challenging datasets with relatively lower recognition accuracy, such as N-Sydney, our model had much higher performance. For the popular N-MNIST dataset, our model had relatively low computational complexity because the number of trainable weights (2.2e4) was lower than other models. 
The event-ratio/inference-time distributions of our SNN model on the LiDAR and DVS datasets are shown in Fig. \ref{fig:effi_lidar} and Fig. \ref{fig:effi_dvs}, respectively. On average, $62\%$ of the events were needed for the N-Sydney dataset, while the DVS datasets mostly took less than $50\%$ of the events to accomplish the inference. It can also be seen that the inference time for most samples was in the range of $1$ to $3$ ms, demonstrating strong time efficiency. \begin{table*}[t] \caption{Performance comparison over the N-Sydney, DVS-Barrel, N-MNIST, N-Caltech101, and Hand-Gesture datasets.} \label{tbl:acc_sydney} \begin{center} \scalebox{1}{ \begin{tabular}{c | c c c} \hline Dataset & Model & Accuracy & Method \\ \hline \hline N-& Chen'14 \cite{chen2014performance} & 71\% & GFH+SVM \\ Sydney & Maturana'15 \cite{maturana2015voxnet} & 73\% & CNN \\ & Our Model & 78\% & SCNN \\ \hline \hline & Perez'13 \cite{perez2013mapping} & 95.2\% & CNN \\ DVS- & Perez'13 \cite{perez2013mapping} & 91.6\% & SCNN \\ Barrel & Orchard'15 \cite{orchard2015hfirst} & 84.9\% & HFirst \\ & Our Model & 99.5\% & SCNN \\ \hline \hline & Orchard'15 \cite{orchard2015hfirst} & 71.20\% & HFirst, 1.2e5 \\ & Neil'16 \cite{neil2016phased} & 97.30\% & LSTM, 4.5e4 \\ N- & Lee'16 \cite{lee2016training} & 98.70\% & SNN, 1.8e6 \\ MNIST & Shrestha'18 \cite{shrestha2018slayer} & 99.20\% & SNN, 7.2e4 \\ & Wu'18 \cite{wu2018spatio} & 98.78\% & SNN, 1.8e6 \\ & Our Model & 99.15\% & SCNN, 2.2e4 \\ \hline \hline & Zhang'06 \cite{zhang2006svm} & 66.23\% & SVM-KNN \\ & Donahue'14 \cite{donahue2014decaf} & 86.91\% & DNN \\ N- & Chatfield'14 \cite{chatfield2014return} & 88.54\% & CNN \\ Caltech & He'15 \cite{he2015spatial} & 93.42\% & CNN \\ 101 & Orchard'15 \cite{orchard2015hfirst} & 5.4\% & HFirst, SNN \\ & Sironi'18 \cite{sironi2018hats} & 64.2\% & HATS, SNN \\ & Our Model & 91.9\% & SCNN \\ \hline \hline & Huang'11 \cite{huang2011gabor} & 94\% & GF \\ Hand- & Mantecon'16 \cite{mantecon2016hand} & 99\% & SVM \\ Gesture & Our Model & 99.9\% & SCNN \\ \hline \end{tabular}} \end{center} \end{table*} \section{Conclusion and Future Work} \label{sec:conclusion} In this paper, we proposed a spiking learning system utilizing temporal-coded spiking neural networks for efficient object recognition from event-based sensors such as LiDAR and DVS. A novel temporal coding scheme was developed, which permits the system to exploit asynchronously arriving sensor events without delay. Integrating nicely with the asynchronous processing nature of SNNs, the system can achieve superior timing efficiency. The performance of the system was evaluated on a list of $7$ LiDAR and DVS datasets. The experiments showed that the proposed method achieved remarkable accuracy on real-world data and significantly reduced the recognition delay. This paper demonstrates the potential of SNNs in challenging applications involving high speed and dynamics. On the other hand, although we developed the general temporal encoding scheme in Section~\ref{sec:snn_model}, the hyper-parameters of the encoding rules (\ref{eq3.25}) and (\ref{eq3.26}) were not optimized in our experiments. Rather, we used hand-picked parameter values heuristically. Thanks to the cost function (\ref{eq3.50}), the training of the SNN led to the minimum spike time $t_{L,i}$ in the final output layer, which means that we still obtained a self-optimized event accumulation time $t$. 
It will be an interesting future research topic to directly optimize these hyper-parameters, either by including them in the cost function (\ref{eq3.50}) or by using genetic algorithms. This will optimize the event accumulation time $t$ to further enhance recognition accuracy and reduce recognition delay. In addition, we evaluated our learning system only for single-object recognition or classification in each image. It remains to extend this system to object detection and/or continuous video processing, which will make the study of time efficiency even more interesting. Considering the small size of the event-driven datasets used in this paper, only relatively simple SNNs were applied because there would be over-fitting otherwise. This might be one of the reasons for our SNN model's surprisingly better performance than VGG-16 shown in Table \ref{tbl:acc_kitti}. It will be interesting future work to adapt our system to large event-based datasets. \newpage \bibliographystyle{named}
\section{Introduction} \label{intro} \begin{figure*}[t] \centering \includegraphics[page=1]{figures.pdf} \caption{} \label{fig:teaser} \end{figure*} \IEEEPARstart{H}{igh}-quality 3D scene data has become increasingly available thanks to the growing popularity of consumer-grade depth sensors and tremendous progress in 3D scene reconstruction research \cite{Roth_BMVC_2012}, \cite{Shotton_CVPR_2013}, \cite{Xiao_ICCV_2013}, \cite{Zhou_CVPR_2014}. Such 3D data, if fully and well annotated, would be useful for powering different computer vision and graphics tasks such as scene understanding \cite{Valentin_CVPR_2013}, \cite{Hane_CVPR_2013}, object detection and recognition \cite{Wu_CVPR_2015}, and functionality reasoning in 3D space \cite{Gupta_CVPR_2011}. Scene segmentation and annotation refer to separating an input scene into meaningful objects. For example, the scene in Fig.~\ref{fig:teaser} can be segmented and annotated into chairs, a table, etc. The literature has shown the crucial role of 2D annotation tools (e.g. \cite{Russell_IJCV_2008}) and 2D image datasets (e.g. \cite{Deng_CVPR_2009}, \cite{Everingham_IJCV_2010}, \cite{Xiao_CVPR_2010}) in various computer vision problems such as semantic segmentation, object detection and recognition \cite{Torralba_PAMI_2008}, \cite{Deng_ECCV_2010}. This inspires us to pursue such tasks on 3D scene data. However, segmentation and annotation of 3D scenes require much more effort due to the large scale of the 3D data (e.g. there are millions of 3D points in a reconstructed scene). Developing a robust tool to facilitate the segmentation and annotation of 3D scenes is thus in demand and is the aim of this work. To this end, we make the following contributions: \begin{itemize} \item[$\bullet$] We propose an interactive framework that effectively couples the geometric and appearance information from multi-view RGB data. The framework is able to automatically perform 3D scene segmentation. \item[$\bullet$] Our tool is equipped with a 2D segmentation algorithm that builds on the 3D segmentation. \item[$\bullet$] We develop assistive user-interactive operations that allow users to flexibly manipulate scenes and objects in both 3D and 2D. Users co-operate with the tool by refining the segmentation and providing semantic annotation. \item[$\bullet$] To further assist users in annotation, we propose an object search algorithm which automatically segments and annotates repetitive objects defined by users. \item[$\bullet$] We create a dataset with more than a hundred scenes. All the scenes are fully segmented and annotated using our tool. We refer readers to~\cite{Hua_Scenenn_3DV_2016} for more details and proof-of-concept applications using the dataset. \end{itemize} Compared with existing works on RGB-D segmentation and annotation (e.g. \cite{Ren_CVPR_2012}, \cite{Gupta_CVPR_2013}), our tool offers several advantages. First, in our tool, segmentation and annotation are centralized in 3D, which frees users from manipulating thousands of images. Second, the tool can work with either the RGB-D images or the triangular mesh of a scene as the input. This enables the tool to handle meshes reconstructed from either RGB-D images \cite{Choi_CVPR_2015} or structure-from-motion \cite{Jancosek_CVPR_2011} in a unified framework. We note that interactive annotation has also been exploited in a few concurrent works, e.g. SemanticPaint in \cite{Valentin_TOG_2015} and Semantic Paintbrush in \cite{Miksik_CHI_2015}. 
However, those systems can only handle scenes that are partially captured at hand and contain a few of objects to be annotated. In contrast, our annotation tool handles complete 3D scenes and is able to work with pre-captured data. Our collected scenes are more complex with a variety of objects. Moreover, the SemanticPaint \cite{Valentin_TOG_2015} requires physical touching for the interaction and hence limits its capability to touchable objects. Meanwhile, objects at different scales can be annotated using our tool. In addition, the tool also supports 2D segmentation which is not available in both SemanticPaint \cite{Valentin_TOG_2015} and Semantic Paintbrush \cite{Miksik_CHI_2015}. \section{Related Work} \label{sec:relatedwork} \textbf{RGB-D Segmentation.} A common approach for scene segmentation is to perform the segmentation on RGB-D images and use object classifiers for labeling the segmentation results. Examples of this approach can be found in \cite{Ren_CVPR_2012}, \cite{Gupta_CVPR_2013}. The spatial relationships between objects can also be exploited to infer the scene labels. For example, Jia et al. \cite{Jia_CVPR_2013} used object layout rules for scene labeling. The spatial relationship between objects was modeled by a conditional random field (CRF) in \cite{Lin_ICCV_2013, Kim_ICCV_2013} and directed graph in \cite{Wong_CGF_2015}. In general, the above methods make use of RGB-D images captured from a single viewpoint of a 3D scene and thus could partially annotate the scene. Compared with those methods, our tool can achieve more complete segmentation results with the 3D models of the scene and its objects.\\ \noindent\textbf{From 2D to 3D Labeling.} Compared with 2D labels, 3D labels are often desired as they provide a more comprehensive understanding of the real world. 3D labels can be propagated by back-projecting 2D labels from image domain to 3D space. For example, Wang et al. \cite{Wang_CVPR_2013} used the labels provided in the ImageNet \cite{Deng_CVPR_2009} to infer 3D labels. In \cite{Xiao_ICCV_2013}, 2D labels were obtained by drawing polygons. Labeling directly on images is time consuming. Typically, a few thousands of images need to be handled. It is possible to perform matching among the images to propagate the annotations from one image to another, e.g. \cite{Xiao_ICCV_2013}, but this process is less reliable.\\ \begin{figure*}[t] \centering \includegraphics[page=2]{figures.pdf} \caption{} \label{fig:flowchart} \end{figure*} \noindent\textbf{3D Object Templates.} 3D object templates can be used to segment 3D scenes. The templates can be organized in holistic models, e.g., \cite{Kim_TOG_2012}, \cite{Salas-Moreno_CVPR_2013}, \cite{Nan_TOG_2012}, \cite{Shao_TOG_2012}, or part-based models, e.g. \cite{Chen_TOG_2014}. The segmentation can be performed on 3D point clouds, e.g. \cite{Kim_TOG_2012}, \cite{Nan_TOG_2012}, \cite{Chen_TOG_2014}, or 3D patches, e.g. \cite{Shao_TOG_2012}, \cite{Salas-Moreno_CVPR_2013}, \cite{Zhang_TOG_2015}. Generally speaking, the above techniques require the template models to be known in advance. They do not fit well our interactive system in which the templates can be provided on the fly by users. In our tool, we propose to use shape matching to help users in the segmentation and annotation task. 
Shape matching does not require off-line training and is proved to perform efficiently in practice.\\ \noindent\textbf{Online Scene Understanding.} Recently, there are methods that directly combine 3D reconstruction with annotation to achieve online scene understanding. For example, SemanticPaint proposed in \cite{Valentin_TOG_2015} allowed users annotate a scene by touching objects of interest. A CRF was then constructed to model each indicated object and then used to parse the scene. The SemanticPaint was extended to the Semantic Paintbrush in \cite{Miksik_CHI_2015} for outdoor scenes annotation by exploiting the farther range of a stereo rig. In both \cite{Valentin_TOG_2015} and \cite{Miksik_CHI_2015}, annotated objects and user-specified objects are assumed to have similar appearance (e.g. color). Furthermore, since the CRF models are built upon the reconstructed data, it is implicitly assumed that the reconstructed data is good enough so that the CRF model constructed from the user-specified object and that of the objects to be annotated have consistent geometric representation. However, the point cloud of the scene is often incomplete, e.g. there are holes. To deal with this issue, we describe the geometric shape of 3D objects using a shape descriptor which is robust to shape variation and occlusion. Experimental results show that our approach works well under noisy data (e.g. broken mesh) and robustly deal with shape deformation while being efficient for practical use. Online interactive labeling is a trend for scene segmentation and annotation in which the scalability and convenience of the user interface are important factors. In \cite{Valentin_TOG_2015}, the annotation can only be done for objects that are physically touchable and hence is limited to partial scenes. In this sense, we believe that our tool would facilitate the creation of large-scale, complete, and semantically annotated 3D scene datasets. \section{System Overview} \label{sec:systemoverview} Fig.~\ref{fig:flowchart} shows the workflow of our tool. The tool includes four main stages: scene reconstruction, automatic 3D segmentation, interactive refinement and annotation, and 2D segmentation. In the first stage (section~\ref{sec:reconstruction}), the system takes a sequence of RGB-D frames and reconstructs a triangular mesh, called \emph{3D scene mesh}. After the reconstruction, we compute and cache the correspondences between the 3D vertices in the reconstructed scene and the 2D pixels on all input frames. This allows seamless switching between segmentation in 3D and 2D in later steps. In the second stage (section~\ref{sec:segmentation}), the 3D scene mesh is automatically segmented. We start by clustering the mesh vertices into supervertices (section~\ref{sec:graphcut}). Next, we group the supervertices into regions (section~\ref{sec:MRF}). We also cache the results of both the steps for later use. The third stage (section~\ref{sec:refinement}) of the system is designed for users to interact with the system. We design three segmentation refinement operations: \emph{merge}, \emph{extract}, and \emph{split}. After refinement, users can make semantic annotation for objects in the scene. To further assist users in segmentation and annotation of repetitive objects, we propose an algorithm to automatically search for repetitive objects specified by a template (section~\ref{sec:objectsearch}). 
We extend the well-known 2D shape context \cite{Belongie_PAMI_2002} to 3D space and apply shape matching to implement this functionality. The fourth stage of the framework (section~\ref{sec:2Dsegmentation}) is designed for the segmentation of 2D frames. In this stage, we devise an algorithm that uses the 3D segmentation results to initialize the 2D segmentation and is based on contour matching. \section{Scene Reconstruction} \label{sec:reconstruction} \subsection{Geometry reconstruction} \label{sec:georeconstruction} Several techniques have been developed for 3D scene reconstruction. For example, KinectFusion~\cite{Newcombe_ISMAR_2011} applied frame-to-model alignment to fuse depth information and visualize 3D scenes in real time. However, KinectFusion tends to drift when depth maps are not accurately aligned due to the accumulation of registration errors over time. Several attempts have been made to avoid drift and have led to significant improvements in high-quality 3D reconstruction. For example, Xiao et al.~\cite{Xiao_ICCV_2013} added object constraints to correct misaligned reconstructions. Zhou et al.~\cite{Zhou_TOG_2013}, \cite{Zhou_CVPR_2014} split the input frames into small chunks, each of which could be accurately reconstructed using a standard SLAM system like KinectFusion. An optimization was then performed to register all the chunks into the same coordinate frame. In robotics, SLAM systems also detect re-visited places and trigger loop closure constraints to enforce global consistency of the camera poses. In this work, we adopt the system in~\cite{Zhou_CVPR_2014}, \cite{Choi_CVPR_2015} to calculate the camera poses. Given the camera poses, the triangular mesh of a scene can be extracted using the marching cubes algorithm \cite{Roth_BMVC_2012}. We also store the camera pose of each input frame for computing 3D-2D correspondences. The normal of each mesh vertex is given by the area-weighted average of the normals of its neighboring faces. We further smooth the resulting normals using a bilateral filter. \subsection{3D-2D Correspondence} \label{sec:3D2Dcorrespondence} Given the reconstructed 3D scene, we align the whole sequence of 2D frames with the 3D scene using the corresponding camera poses obtained from section~\ref{sec:georeconstruction}. For each vertex, the normal is computed directly on the 3D mesh and its color is estimated as the median color of the corresponding pixels in the 2D frames. \section{Segmentation in 3D} \label{sec:segmentation} After the reconstruction, a scene mesh typically consists of millions of vertices. In this stage, those vertices are segmented into far fewer regions. To achieve this, we first divide the reconstructed scene into a number of so-called supervertices by applying a purely geometry-based segmentation method. We then merge the supervertices into larger regions by considering both surface normals and colors. We keep all the supervertices and regions for later use. In addition, the hierarchical structures of the regions, supervertices, and mesh vertices (e.g. the list of mesh vertices composing a supervertex) are also recorded. \subsection{Graph-based Segmentation} \label{sec:graphcut} We extend the efficient graph-based image segmentation algorithm of Felzenszwalb et al. \cite{Felzenszwalb_IJCV_2004} to 3D space. Specifically, the algorithm operates on a graph defined by the scene mesh, in which each node corresponds to a vertex in the mesh. 
Two nodes in the graph are linked by an edge if their two corresponding vertices in the mesh are the vertices of a triangle. Let $\mathbf{V} = \{\mathbf{v}_i\}$ be the set of vertices in the mesh. The edge connecting two vertices $\mathbf{v}_i$ and $\mathbf{v}_j$ is weighted as \begin{equation} \label{eq:edgeweight} w(\mathbf{v}_i, \mathbf{v}_j) = 1 - {{\mathbf{n}}_i}^\top {\mathbf{n}}_j, \end{equation} where ${\mathbf{n}}_i$ and ${\mathbf{n}}_j$ are the unit normals of $\mathbf{v}_i$ and $\mathbf{v}_j$ respectively. The graph-based segmenter in \cite{Felzenszwalb_IJCV_2004} employs a number of parameters including a smoothing factor used for noise filtering (normals in our case), a threshold representing the contrast between adjacent regions, and the minimum size of segmented regions. In our implementation, those parameters were set to 0.5, 500, and 20 respectively. However, we also make those parameters available to users for customization. The graph-based segmentation algorithm results in a set of supervertices $\mathcal{S} = \{s_i\}$. Each supervertex is a group of geometrically homogeneous vertices with similar surface normals. The bottom left image in Fig.~\ref{fig:flowchart} shows an example of the supervertices. More examples can be found in Fig.~\ref{fig:results1} and Fig.~\ref{fig:results2}. \subsection{MRF-based Segmentation} \label{sec:MRF} The graph-based segmentation often produces a large number (e.g. few thousands) of supervertices which could require considerable effort for annotation. To reduce this burden, the supervertices are clustered into regions via optimizing an MRF model. In particular, for each supervertex $s_i \in \mathcal{S}$, the color and normal of $s_i$, denoted as $\bar{\textbf{c}}_i$ and $\bar{\textbf{n}}_i$, are computed as the means of the color values and normals of all vertices $\mathbf{v} \in s_i$. Each supervertex $s_i \in \mathcal{S}$ is then represented by a node $o_i$ in an MRF. Two nodes $o_i$ and $o_j$ are directly connected if $s_i$ and $s_j$ share some common boundary (i.e. $s_i$ and $s_j$ are adjacent supervertices). Let $l_i$ be the label of $o_i$, the unary potentials are defined as \begin{equation} \label{eq:likelihood} \psi_1(o_i, l_i) = -\log \mathcal{G}^c_i(\bar{\textbf{c}}_i, \boldsymbol\mu^c_{l_i}, \boldsymbol\Sigma^c_{l_i}) - \log \mathcal{G}^n_i(\bar{\textbf{n}}_i, \boldsymbol\mu^n_{l_i}, \boldsymbol\Sigma^n_{l_i}), \end{equation} where $\mathcal{G}^c_{l_i}$ and $\mathcal{G}^n_{l_i}$ are the Gaussians of the color values and normals of the label class of $l_i$, $\boldsymbol\mu^c_{l_i}/\boldsymbol\mu^n_{l_i}$ and $\boldsymbol\Sigma^c_{l_i}/\boldsymbol\Sigma^n_{l_i}$ are the mean and covariance matrix of $\mathcal{G}^c_{l_i}/\mathcal{G}^n_{l_i}$. The pairwise potentials are defined as the Potts model \cite{Barker_PR_2000} \begin{align} \label{eq:prior} \psi_2(l_i, l_j) = \begin{cases} -1, & \mbox{if } l_i = l_j \\ 1, & \mbox{otherwise}. \end{cases} \end{align} Let $\mathcal{L}=\{l_1, l_2, ..., l_{|\mathcal{S}|}\}$ be the set of labels of supervertices. The optimal labels $\mathcal{L}^*$ is determined by \begin{align} \label{eq:energy} \mathcal{L}^* = \operatorname*{arg\,min}_{\mathcal{L}} \bigg[ \sum_{i} \psi_1(o_i, l_i) + \gamma \sum_{i,j} \psi_2(l_i, l_j) \bigg] \end{align} where $\gamma$ is weight factor set to 0.5 in our implementation. The optimization problem in (\ref{eq:energy}) is solved using the method in \cite{Barker_PR_2000}. 
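As an illustration of the potentials defined above, the sketch below evaluates the unary term (\ref{eq:likelihood}) and the Potts prior (\ref{eq:prior}) for one supervertex and one candidate label. It is a simplified stand-in rather than our actual implementation: the per-label Gaussian statistics are assumed to be given, and how they are re-estimated during the optimization is omitted.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def unary_potential(c_bar, n_bar, mu_c, cov_c, mu_n, cov_n):
    # psi_1: negative log-likelihood of the supervertex's mean color and
    # mean normal under the color/normal Gaussians of the candidate label.
    return (-multivariate_normal.logpdf(c_bar, mean=mu_c, cov=cov_c)
            - multivariate_normal.logpdf(n_bar, mean=mu_n, cov=cov_n))

def potts_pairwise(l_i, l_j):
    # psi_2: Potts prior on the labels of adjacent supervertices.
    return -1.0 if l_i == l_j else 1.0
\end{verbatim}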
In our implementation, the number of labels was initialized to the number of supervertices; each supervertex was assigned to a different label. Fig.~\ref{fig:flowchart} (bottom) shows the result of the MRF-based segmentation. More results of this step are presented in Fig.~\ref{fig:results1} and Fig.~\ref{fig:results2}. \section{Segmentation Refinement and Annotation in 3D} \label{sec:refinement} The automatic segmentation stage could produce over- and under-segmented regions. To resolve these issues, we design three operations: \emph{merge}, \emph{extract}, and \emph{split}. \textbf{Merge.} This operation is used to resolve over-segmentation. In particular, users identify the over-segmented regions that need to be grouped by stroking on them. The merge operation is illustrated in the first row of Fig.~\ref{fig:segmentationrefinement}. \textbf{Extract.} This operation is designed to handle under-segmentation. In particular, users first select an under-segmented region, and the supervertices composing it are retrieved. Users can then select a few supervertices and use the merge operation to group them into a new region. Note that the supervertices are not recomputed. Instead, they are retrieved from the cached result of the graph-based segmentation step. The second row of Fig.~\ref{fig:segmentationrefinement} shows the extract operation. \textbf{Split.} In a few rare cases, the MRF-based segmentation may perform differently on different regions. This is probably because of variations in the geometric shape and appearance of objects. For example, a scene may have chairs of a single uniform color as well as chairs composed of multiple colors. Therefore, a single setting of the parameters in the MRF-based segmentation may not suit all objects. \begin{figure}[t] \centering \includegraphics[page=3]{figures.pdf} \caption{} \label{fig:segmentationrefinement} \end{figure} \begin{figure*}[t] \centering \includegraphics[page=4]{figures.pdf} \caption{} \label{fig:annotation} \end{figure*} To address this issue, we design a split operation enabling user-guided MRF-based segmentation. Specifically, users first select an under-segmented region by stroking on that region. The MRF-based segmentation is then invoked on the selected region with a small value of $\gamma$ (see (\ref{eq:energy})) to generate finer-grained regions. We then enforce a constraint such that the starting and ending points of the stroke belong to two different regions. For example, assume that $l_i$ and $l_j$ are the labels of the two supervertices that respectively contain the starting and ending points of the stroke. To bias the objective function in (\ref{eq:energy}), $\psi_2(l_i, l_j)$ in (\ref{eq:prior}) is set to $-1$ when $l_i \neq l_j$, and to a large value (e.g. $10^9$) otherwise. By doing so, the optimization in (\ref{eq:energy}) favors the case $l_i \neq l_j$. In other words, the supervertices at the starting and ending points are driven into separate regions. Note that the MRF-based segmentation is only re-executed on the selected region. Therefore, the split operation is fast and does not hinder user interaction. The third row of Fig.~\ref{fig:segmentationrefinement} illustrates the split operation. Through experiments, we have found that most of the time users perform merge and extract operations. The split operation is only used when the extract operation cannot handle severe under-segmentation, but such cases are not common in practice. 
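A small sketch of how the split operation biases the pairwise term is given below: the pair of supervertices containing the stroke's endpoints receives a prohibitive cost for sharing a label, while all other pairs keep the Potts prior (\ref{eq:prior}). The function and constant names are illustrative only.
\begin{verbatim}
LARGE_COST = 1.0e9   # effectively forbids one label for both stroke endpoints

def split_pairwise(l_i, l_j, is_endpoint_pair=False):
    # Pairwise term during the split operation.  For the supervertices
    # containing the stroke's start and end points, identical labels are
    # heavily penalized so that they are driven into separate regions.
    if is_endpoint_pair:
        return LARGE_COST if l_i == l_j else -1.0
    return -1.0 if l_i == l_j else 1.0
\end{verbatim}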
When all the 3D segmented regions have been refined, users can annotate the regions by providing the object type, e.g. coffee table, sofa chair. Fig.~\ref{fig:annotation} shows an example of using our tool for annotation. Note that users are free to navigate the scene in both 3D and 2D space. \section{Object Search} \label{sec:objectsearch} A scene may contain multiple instances of an object class, e.g. the nine chairs in Fig.~\ref{fig:shapematching}. To support labeling and annotating such repetitive objects, users can define a template by selecting an existing region or multiple regions composing the template. Those regions are the results of the MRF-based segmentation or of user refinement. Given the user-defined template, our system automatically searches for objects that are similar to the template. Note that repetitive objects are not necessarily present as single regions; each repetitive object may be composed of multiple regions. For example, each chair in Fig.~\ref{fig:shapematching}(a) consists of different regions such as the back, seat, and legs. Once a group of regions is found to match the template well, the regions are merged into a single object and recommended to users for verification. We extend the 2D shape context proposed in \cite{Belongie_PAMI_2002} to describe 3D objects (section~\ref{sec:shapecontext}). Matching objects with the template is performed by comparing shape context descriptors (section~\ref{sec:shapematching}). The object search is then built upon the sliding-window object detection approach \cite{Dalal_CVPR_2005} (section~\ref{sec:searching}). \subsection{Shape Context} \label{sec:shapecontext} \begin{figure}[t] \centering \includegraphics[page=5]{figures.pdf} \caption{} \label{fig:3Dshapecontext} \end{figure} Shape context was proposed by Belongie et al. \cite{Belongie_PAMI_2002} as a 2D shape descriptor and is well known for desirable properties such as being discriminative, robust to shape deformation and transformation, and relatively insensitive to noise and partial occlusion. Those properties fit our needs well for several reasons. First, reconstructed scene meshes can be incomplete and contain noisy surfaces. Second, occlusions may appear due to the lack of sufficient images completely covering objects. Third, the tool is expected to adapt to variations in object shape, e.g. chairs with and without arms. In our work, a 3D object is represented by a set $\mathcal{V}$ of vertices obtained from the 3D reconstruction step. For each vertex $\mathbf{v}_i \in \mathcal{V}$, the shape context of $\mathbf{v}_i$, denoted $\mathbf{s}(\mathbf{v}_i)$, is the histogram of the relative locations of the other vertices $\mathbf{v}_j$, $j \neq i$, with respect to $\mathbf{v}_i$. Let $\mathbf{u}_{ij} = \mathbf{v}_i - \mathbf{v}_j$. The relative location of a vertex $\mathbf{v}_j \in \mathcal{V}$ to $\mathbf{v}_i$ is encoded by the length $\| \mathbf{u}_{ij} \|$ and the spherical coordinates $(\theta, \phi)_{ij}$ of $\mathbf{u}_{ij}$. In our implementation, the lengths $\| \mathbf{u}_{ij} \|$ were quantized into 5 levels; to make the shape context $\mathbf{s}(\mathbf{v}_i)$ more sensitive to nearby structure than to distant structure, this quantization was performed on a log scale. The spherical angles $(\theta, \phi)_{ij}$ were quantized uniformly into 6 discrete values. Fig.~\ref{fig:3Dshapecontext} illustrates the 3D shape context.
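A minimal sketch of the descriptor computation is given below (Python/NumPy). The bin counts follow the text: 5 log-spaced radial bins, and the 6 angular values are interpreted here as 6 bins for each of the two spherical angles (our reading). The scale normalization is included, while the rotation handling discussed next is omitted for brevity; the function name and axis conventions are ours:

\begin{verbatim}
import numpy as np

def shape_context_3d(vertices, i, n_r=5, n_theta=6, n_phi=6):
    """Histogram of the relative locations of all other vertices with respect
    to vertex i: log-spaced radial bins, uniform bins for the polar and
    azimuthal angles. Returns a flattened (n_r * n_theta * n_phi) vector."""
    u = vertices[i] - np.delete(vertices, i, axis=0)   # u_ij = v_i - v_j
    r = np.linalg.norm(u, axis=1)
    r_norm = r / (r.mean() + 1e-12)                    # scale invariance
    theta = np.arccos(np.clip(u[:, 2] / (r + 1e-12), -1.0, 1.0))  # polar angle
    phi = np.arctan2(u[:, 1], u[:, 0])                 # azimuth in [-pi, pi]
    r_edges = np.logspace(np.log10(r_norm.min() + 1e-6),
                          np.log10(r_norm.max() + 1e-6), n_r + 1)
    r_bin = np.clip(np.digitize(r_norm, r_edges) - 1, 0, n_r - 1)
    t_bin = np.clip((theta / np.pi * n_theta).astype(int), 0, n_theta - 1)
    p_bin = np.clip(((phi + np.pi) / (2 * np.pi) * n_phi).astype(int), 0, n_phi - 1)
    hist = np.zeros((n_r, n_theta, n_phi))
    np.add.at(hist, (r_bin, t_bin, p_bin), 1)
    return hist.ravel()
\end{verbatim}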
The shape context descriptor is made scale-invariant by normalizing $\| \mathbf{u}_{ij} \|$ by the mean length of all vectors. To make the shape context rotation-invariant, Kortgen et al. \cite{Kortgen_ESCG_2003} computed the spherical coordinates $(\theta, \phi)_{ij}$ relative to the eigenvectors of the covariance matrix of all vertices. However, the eigenvectors may not be computed reliably for shapes having no dominant orientation, e.g. rounded objects. In addition, the eigenvectors are only informative when the shape is complete, while our scene meshes may be incomplete. To overcome this issue, we establish a local coordinate frame at each vertex of a shape using its normal and tangent vector, where the tangent vector of a vertex $\mathbf{v}_i$ is the one connecting $\mathbf{v}_i$ to the centroid of the shape. We have found this approach to work more reliably. Since a reconstructed scene often contains millions of vertices, prior to applying the object search we uniformly sample each scene to $20,000$ points, which results in objects of $100$--$200$ vertices. \subsection{Shape Matching} \label{sec:shapematching} Matching two given shapes $\mathcal{V}$ and $\mathcal{Y}$ amounts to maximizing the correspondences between pairs of vertices on the two shapes, i.e. minimizing the deformation between the two shapes in a point-wise fashion. The deformation cost between two vertices $\mathbf{v}_i \in \mathcal{V}$ and $\mathbf{y}_j \in \mathcal{Y}$ is measured by the $\chi^2$ distance between the two corresponding shape context descriptors extracted at $\mathbf{v}_i$ and $\mathbf{y}_j$ as follows, \begin{align} \label{eq:chisquareddistance} \chi^2(\mathbf{s}(\mathbf{v}_i), \mathbf{s}(\mathbf{y}_j)) = \frac{1}{2}\sum_{b=1}^{\dim(\mathbf{s}(\mathbf{v}_i))} \frac{(\mathbf{s}(\mathbf{v}_i)[b]-\mathbf{s}(\mathbf{y}_j)[b])^2}{\mathbf{s}(\mathbf{v}_i)[b]+\mathbf{s}(\mathbf{y}_j)[b]} \end{align} where $\dim(\mathbf{s}(\mathbf{v}_i))$ is the dimension (i.e. the number of bins) of $\mathbf{s}(\mathbf{v}_i)$ and $\mathbf{s}(\mathbf{v}_i)[b]$ is the value of $\mathbf{s}(\mathbf{v}_i)$ at the $b$-th bin. Given the deformation cost of every pair of vertices on the two shapes $\mathcal{V}$ and $\mathcal{Y}$, shape matching can be solved using the shortest augmenting path algorithm \cite{Jonker_Computing_1987}. To make the matching algorithm adaptive to shapes with different numbers of vertices, ``dummy'' vertices are added. This also makes the matching robust to noisy data and partial occlusions. Formally, the deformation cost $C(\mathcal{V},\mathcal{Y})$ between two shapes $\mathcal{V}$ and $\mathcal{Y}$ is computed as, \begin{equation} \label{eq:deformationcost} C(\mathcal{V},\mathcal{Y}) = \sum_{\mathbf{v}_i \in \hat{\mathcal{V}}} \chi^2(\mathbf{s}(\mathbf{v}_i), \mathbf{s}(\pi(\mathbf{v}_i))) \end{equation} where $\hat{\mathcal{V}}$ is identical to $\mathcal{V}$ or augmented from $\mathcal{V}$ by adding dummy vertices, and $\pi(\mathbf{v}_i) \in \hat{\mathcal{Y}}$ is the vertex matched to $\mathbf{v}_i$, determined using \cite{Jonker_Computing_1987}.
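The matching cost in (\ref{eq:deformationcost}) can be sketched as follows; SciPy's \texttt{linear\_sum\_assignment} is used here as a stand-in for the shortest augmenting path algorithm of \cite{Jonker_Computing_1987}, and the cost matrix is padded with a constant dummy cost (an assumed value) that plays the role of the dummy vertices:

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def chi2_distance(s1, s2):
    """Chi-squared distance of eq:chisquareddistance between two descriptors
    (NumPy arrays); bins that are empty in both descriptors are skipped."""
    denom = s1 + s2
    mask = denom > 0
    return 0.5 * float(np.sum((s1[mask] - s2[mask]) ** 2 / denom[mask]))

def deformation_cost(S_V, S_Y, dummy_cost=0.25):
    """Deformation cost C(V, Y) of eq:deformationcost. S_V and S_Y hold the
    shape context descriptors of the two shapes; rows/columns of dummy cost
    pad the matrix so shapes with different numbers of vertices can still be
    matched (robustness to occlusion and missing data)."""
    n_v, n_y = len(S_V), len(S_Y)
    n = max(n_v, n_y)
    cost = np.full((n, n), dummy_cost)
    for i in range(n_v):
        for j in range(n_y):
            cost[i, j] = chi2_distance(S_V[i], S_Y[j])
    row, col = linear_sum_assignment(cost)      # optimal one-to-one matching
    C = float(cost[row, col].sum())             # includes dummy assignments
    matches = {int(i): int(j) for i, j in zip(row, col) if i < n_v and j < n_y}
    return C, matches
\end{verbatim}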
To further improve the matching, we also consider how well the two matched shapes are aligned. In particular, we first align $\mathcal{V}$ to $\mathcal{Y}$ using a rigid transformation. This rigid transformation is represented by a $4 \times 4$ matrix and estimated using a RANSAC procedure that randomly picks three pairs of correspondences and determines the rotation and translation \cite{Horn_JOSA_1987}. We then compute an alignment error, \begin{equation} \label{eq:alignmenterror} E(\mathcal{V}, \mathcal{Y}) = \min \bigg\{ \sqrt{ \frac{1}{|\mathcal{V}|} \sum_{i=1}^{|\mathcal{V}|} \epsilon^{(\mathcal{V})}_i}, \sqrt{ \frac{1}{|\mathcal{Y}|} \sum_{i=1}^{|\mathcal{Y}|} \epsilon^{(\mathcal{Y})}_i} \bigg\} \end{equation} where \begin{equation} \epsilon^{(\mathcal{V})}_i = \begin{cases} \| \pi(\mathbf{v}_i) - T \mathbf{v}_i \|^2 & \text{if } \pi(\mathbf{v}_i) \text{ exists for } \mathbf{v}_i \in \mathcal{V} \\ \Delta^2 & \text{otherwise} \end{cases} \end{equation} and similarly for $\epsilon^{(\mathcal{Y})}_i$, where $T$ is the rigid transformation matrix and $\Delta$ is a large value used to penalize misalignments. A match is confirmed if (i) $C(\mathcal{V}, \mathcal{Y}) < \tau_s$ and (ii) $E(\mathcal{V}, \mathcal{Y}) < \tau_a$, where $\tau_s$ and $\tau_a$ are thresholds. In our experiments, we set $\Delta = 2$ (meters), $\tau_s = 0.7$, and $\tau_a = 0.4$. We found that the object search was not overly sensitive to these parameters, and the above settings achieved the best performance. \begin{figure*} \centering \includegraphics[page=6]{figures.pdf} \caption{} \label{fig:shapematching} \end{figure*} \subsection{Searching} \label{sec:searching} Object search is performed with a sliding-window approach \cite{Dalal_CVPR_2005}. Specifically, we take the 3D bounding box of the template and use it as the window to scan a 3D scene. At each location in the scene, all regions that intersect the window are considered as possible parts of a matching object. However, it would be intractable to consider every possible combination of those regions. To deal with this issue, we propose a greedy algorithm that operates iteratively by adding and removing regions. \begin{algorithm} \caption{Grow-shrink procedure. $C$ and $E$ are the matching cost and alignment error defined in (\ref{eq:deformationcost}) and (\ref{eq:alignmenterror}).} \label{alg:findingobjs} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKwFunction{PopUp}{PopUp} \SetKwData{True}{true} \SetKwData{False}{false} \SetKw{And}{and} \underline{function GrowShrink} $(\mathcal{R},\mathcal{W}, \mathcal{O})$\\ \Input{$\mathcal{R}$: set of regions to examine, \\ $\mathcal{W}$: window,\\ $\mathcal{O}$: user-defined template} \Output{$\mathcal{A}$: best matching object} \Begin{ $\mathcal{A} \leftarrow \mathcal{R}$\; \For{$i\leftarrow 1$ \KwTo $iterations$} { \tcp{grow} $\mathcal{M} \leftarrow \mathcal{A}$\; \For{$r \in \mathcal{W} \setminus \mathcal{M}$} { \If{$C(\mathcal{M} \cup \{r\}, \mathcal{O}) < C(\mathcal{A}, \mathcal{O})$ \And $E(\mathcal{M} \cup \{r\}, \mathcal{O}) < \tau_a$} { $\mathcal{A} \leftarrow \mathcal{M} \cup \{r\}$\; } } \tcp{shrink} $\mathcal{M} \leftarrow \mathcal{A}$\; \For{$r \in \mathcal{M}$} { \If{$C(\mathcal{M} \setminus \{r\}, \mathcal{O}) < C(\mathcal{A}, \mathcal{O})$ \And $E(\mathcal{M} \setminus \{r\}, \mathcal{O}) < \tau_a$} { $\mathcal{A} \leftarrow \mathcal{M} \setminus \{r\}$\; } } } \Return{$\mathcal{A}$} } \end{algorithm}
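The alignment error $E$ used by Algorithm~\ref{alg:findingobjs} and defined in (\ref{eq:alignmenterror}) can be sketched as follows. The rigid transform $T$ is assumed to have been estimated beforehand (e.g. by the RANSAC procedure described above), \texttt{matches} is the partial correspondence returned by the shape matching step, and one plausible reading of the symmetric term $\epsilon^{(\mathcal{Y})}_i$ is used; the names are ours:

\begin{verbatim}
import numpy as np

def alignment_error(V, Y, matches, T, delta=2.0):
    """E(V, Y) of eq:alignmenterror. V and Y are (n, 3) arrays of vertices,
    `matches` maps indices of V to indices of Y (a possibly partial one-to-one
    correspondence), T is the 4x4 rigid transform aligning V to Y, and delta
    penalizes vertices that have no correspondence."""
    # squared residual of each matched pair after applying T to the V side
    res = {(i, j): float(np.sum((Y[j] - (T @ np.append(V[i], 1.0))[:3]) ** 2))
           for i, j in matches.items()}
    res_v = {i: r for (i, j), r in res.items()}
    res_y = {j: r for (i, j), r in res.items()}
    eps_v = [res_v.get(i, delta ** 2) for i in range(len(V))]
    eps_y = [res_y.get(j, delta ** 2) for j in range(len(Y))]
    return min(float(np.sqrt(np.mean(eps_v))), float(np.sqrt(np.mean(eps_y))))
\end{verbatim}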
The general idea is as follows. Let $\mathcal{R}$ be the set of regions that intersect the window $\mathcal{W}$, i.e. the 3D bounding box of the template. For a region $r \in \mathcal{W} \setminus \mathcal{R}$, we verify whether the object formed by $\mathcal{R} \cup \{r\}$ is more similar to the user-defined template $\mathcal{O}$ than $\mathcal{R}$ is. Similarly, for every region $r \in \mathcal{R}$, we also verify the object formed by $\mathcal{R} \setminus \{r\}$. These adding and removing steps are performed alternately for a small number of iterations until the best matching result (i.e. a group of regions) is found. This procedure is called \emph{grow-shrink} and is described in Algorithm~\ref{alg:findingobjs}. In our implementation, the spatial strides along the $x$-, $y$-, and $z$-directions of the window $\mathcal{W}$ were set to the size of $\mathcal{W}$. The number of iterations in Algorithm~\ref{alg:findingobjs} was set to $10$, which resulted in satisfactory accuracy and efficiency. Since a region may be contained in more than one window, it may be verified multiple times in multiple groups of regions. To avoid this, once an object candidate is found in a window, its regions are not considered for any other object or window. Fig.~\ref{fig:shapematching} illustrates the robustness of the object search in localizing repetitive objects under severe conditions (e.g. objects with incomplete shapes). The search procedure may still miss some objects. To handle such cases, we design an operation called \emph{guided merge}. In particular, after defining the template, users simply select one of the regions of a target object that was missed by the object search. The grow-shrink procedure is then applied to the selected region to seek a better match with the template. Fig.~\ref{fig:guidedmerge} shows an example of the guided merge operation. \begin{figure} \centering \includegraphics[page=8]{figures.pdf} \caption{} \label{fig:guidedmerge} \end{figure} \section{Segmentation on 2D} \label{sec:2Dsegmentation} Segmentation in 2D can be obtained by projecting regions in 3D space onto 2D frames. However, the projected regions may not align well with the true objects in the 2D frames (see Fig.~\ref{fig:2Dseginterim}). There are several reasons for this. For example, the depth and color images used to reconstruct a scene might not be exactly aligned at object boundaries; the camera intrinsics are from factory settings and not well calibrated; and camera registration during reconstruction exhibits drift. \begin{figure*} \centering \includegraphics[page=9]{figures.pdf} \caption{} \label{fig:2Dseginterim} \end{figure*} \begin{figure*} \centering \includegraphics[page=10]{figures.pdf} \caption{} \label{fig:misalignment} \end{figure*} To overcome this issue, we propose an alignment algorithm which fits the boundaries of projected regions to the true boundaries in 2D frames. The true boundaries of a 2D frame can be extracted using an edge detector (e.g. the Canny edge detector \cite{Canny_PAMI_1986}). Let $E=\{e_j\}$ denote the set of edge points on the edge map of a 2D frame, and let $U=\{u_i\}$ be the set of contour points of a projected object on that frame. $U$ is then ordered using the Moore neighbor tracing algorithm \cite{Narappanawar_CVIU_2011}. The ordering step expresses the contour alignment problem in a form to which dynamic programming can be applied for efficient optimization.
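A minimal sketch of this preprocessing step is given below. It assumes OpenCV is available, uses \texttt{cv2.Canny} for the edge map, and uses \texttt{cv2.findContours} as a practical substitute for Moore neighbor tracing, since it also returns the boundary pixels of the projected region mask in order:

\begin{verbatim}
import cv2
import numpy as np

def edge_and_contour_points(gray_frame, region_mask):
    """Returns E, the edge points of the frame, and U, the ordered contour
    points of the projected region (both as (row, col) coordinates). The
    region mask is assumed to be a non-empty binary image."""
    edges = cv2.Canny(gray_frame, 50, 150)        # thresholds are indicative only
    E = np.column_stack(np.nonzero(edges))
    contours, _ = cv2.findContours(region_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)  # keep the main boundary
    U = largest[:, 0, ::-1]                       # (x, y) -> (row, col), ordered
    return E, U
\end{verbatim}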
At each contour point $u_i$, we consider a $21 \times 21$-pixel window centered at $u_i$ (relative to a $640 \times 480$-pixel image) and extract the histogram $h_{u_i}$ of the orientations of the vectors $(u_i,u_k)$, $u_k \in U$, within the window. The orientations are uniformly quantized into 16 bins. We perform the same operation for the edge points $e_j \in E$. The dissimilarity between the two local shapes at a contour point $u_i$ and an edge point $e_j$ is computed as $\chi^2(h_{u_i}, h_{e_j})$ (similarly to (\ref{eq:chisquareddistance})). We also consider the continuity and smoothness of contours. In particular, the continuity between two adjacent points $u_{i}$ and $u_{i-1}$ is defined as $\|u_{i} - u_{i-1}\|$. The smoothness of a fragment of three consecutive points $u_i$, $u_{i-1}$, $u_{i-2}$ is computed as $\cos(u_i - u_{i-1}, u_{i-2}-u_{i-1})$, where $u_i - u_{i-1}$ and $u_{i-2}-u_{i-1}$ denote the vectors connecting $u_{i-1}$ to $u_i$ and $u_{i-1}$ to $u_{i-2}$ respectively, and $\cos(\cdot, \cdot)$ is the cosine of the angle formed by these two vectors. Aligning $U$ to $E$ amounts to identifying a mapping function $f: U \rightarrow E$ that maps each contour point $u_i \in U$ to an edge point $e_j \in E$ so as to, \begin{align} \label{eq:alignmentobjective} &\mbox{minimize} \bigg[ \sum_{i=1}^{|U|} \chi^2(h_{u_i}, h_{f(u_i)}) \notag \\ &+ \kappa_1 \sum_{i=2}^{|U|} \|f(u_{i}) - f(u_{i-1})\| \notag \\ &+ \kappa_2 \sum_{i=3}^{|U|} \cos(f(u_i) - f(u_{i-1}), f(u_{i-2})-f(u_{i-1})) \bigg] \end{align} The optimization problem in (\ref{eq:alignmentobjective}) can be viewed as a bipartite graph matching problem \cite{Jonker_Computing_1987}. However, since $U$ is ordered, it can be solved efficiently using dynamic programming \cite{Thayananthan_CVPR_2003}. In particular, denoting $m_{i,j}=\chi^2(h_{u_i}, h_{e_j})$, $f_i=f(u_i)$, and $f_{i,j}=f(u_i)-f(u_j)$, the objective function in (\ref{eq:alignmentobjective}) can be rewritten as, \begin{align} \label{eq:alignmentdynamicprogramming} \mathcal{F}_i &= \begin{cases} \min_{j \in E} \{\mathcal{F}_{i-1} + m_{i,j} + \kappa_1 \|f_{i,i-1}\| \\ + \kappa_2 \cos(f_{i,i-1},f_{i-2,i-1})\}, & \mbox{if }i>2\\ \min_{j \in E} \{\mathcal{F}_{i-1} + m_{i,j} + \kappa_1 \|f_{i,i-1}\| \}, & \mbox{if }i=2\\ \min_{j \in E} \{m_{i,j}\}, & \mbox{if }i=1 \end{cases} \end{align} where $\kappa_1$ and $\kappa_2$ are user parameters. We tried various values and found that $\kappa_1=0.1$ and $\kappa_2=3.0$ often produced good results. In (\ref{eq:alignmentdynamicprogramming}), for each contour point $u_i$, all edge points are examined for a match. However, this exhaustive search is not necessary since the misalignment is limited in practice. To save computational cost, we limit the search space for each contour point $u_i$ to its $k$ nearest edge points whose distance to $u_i$ is less than a distance $\delta$. In our experiments, $\delta$ was set to $10\%$ of the image dimension, e.g., $\delta=48$ for a $640\times480$-pixel image, and the number of nearest edge points $k$ was set to 30. Fig.~\ref{fig:2Dseginterim}(c) shows an example of contour alignment obtained by optimizing (\ref{eq:alignmentdynamicprogramming}) with dynamic programming. We also verified the contribution of the continuity and smoothness terms. Fig.~\ref{fig:misalignment} shows the results when the cues are used individually and in combination. When all the cues are taken into account, the contours are mostly well aligned with the true object boundaries.
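For concreteness, the dynamic program in (\ref{eq:alignmentdynamicprogramming}), restricted to the nearest edge points of each contour point, can be sketched as follows (Python/NumPy). Because the smoothness term looks two steps back, the sketch keeps the pair of previous and current assignments as the DP state; all names are ours:

\begin{verbatim}
import numpy as np

def align_contour(candidates, local_cost, kappa1=0.1, kappa2=3.0):
    """candidates[i]: list of candidate edge points (2-vectors) for contour
    point u_i, already restricted to its k nearest edge points within delta.
    local_cost[i][j]: chi^2 local-shape cost of matching u_i to candidates[i][j].
    Returns, for each contour point, the index of the chosen candidate."""
    n = len(candidates)
    if n == 1:
        return [int(np.argmin(local_cost[0]))]
    k = [len(c) for c in candidates]
    INF = float('inf')
    # D[a, b]: best prefix cost with f(u_{i-1}) = a-th and f(u_i) = b-th candidate
    D = np.empty((k[0], k[1]))
    for a in range(k[0]):
        for b in range(k[1]):
            cont = np.linalg.norm(candidates[1][b] - candidates[0][a])
            D[a, b] = local_cost[0][a] + local_cost[1][b] + kappa1 * cont
    back = [None] * n
    for i in range(2, n):
        D_new = np.full((k[i - 1], k[i]), INF)
        back_i = np.zeros((k[i - 1], k[i]), dtype=int)
        for b in range(k[i - 1]):
            for c in range(k[i]):
                v1 = candidates[i][c] - candidates[i - 1][b]
                cont = np.linalg.norm(v1)
                best, best_a = INF, 0
                for a in range(k[i - 2]):
                    v2 = candidates[i - 2][a] - candidates[i - 1][b]
                    smooth = float(np.dot(v1, v2)) / (cont * np.linalg.norm(v2) + 1e-12)
                    if D[a, b] + kappa2 * smooth < best:
                        best, best_a = D[a, b] + kappa2 * smooth, a
                D_new[b, c] = best + local_cost[i][c] + kappa1 * cont
                back_i[b, c] = best_a
        D, back[i] = D_new, back_i
    b, c = np.unravel_index(int(np.argmin(D)), D.shape)
    assign = [int(c), int(b)]                  # choices for u_{n-1}, u_{n-2}
    for i in range(n - 1, 1, -1):
        a = int(back[i][b, c])
        assign.append(a)
        b, c = a, b
    assign.reverse()
    return assign
\end{verbatim}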
One exception is the seat of the green chair, which is not correctly recovered; we found that this is because the Canny detector missed important edges on the boundaries of the chair. Users are also free to edit the alignment results. \section{Experiments} \label{sec:experiments} We present the dataset on which experiments were conducted in section~\ref{sec:data}. We evaluate the 3D segmentation in section~\ref{sec:3Dsegmentationevaluation}. The object search is evaluated in section~\ref{sec:objectsearchevaluation}. Experimental results of the 2D segmentation are finally presented in section~\ref{sec:2Dsegmentationevaluation}. \subsection{Dataset} \label{sec:data} \begin{figure*} \centering \includegraphics[page=11]{figures.pdf} \caption{} \label{fig:results1} \end{figure*} \begin{figure*} \centering \includegraphics[page=12]{figures.pdf} \caption{} \label{fig:results2} \end{figure*} \begin{table*}[!ht] \caption{\small Comparison of the graph-based segmentation and MRF-based segmentation. For our captured scenes, the reported numbers are averages over all the scenes. Note that for the user-refined results, the numbers of annotated objects are smaller than the numbers of labels (i.e. segments) because annotation was done only for objects that are common in practice.} \label{tab:3DSeg} \centering \small \begin{tabular}{lrrrrrrrr} \toprule & & \multicolumn{2}{c}{Graph-based} & \multicolumn{2}{c}{MRF-based} & \multicolumn{2}{c}{User refined} & Interactive time\\ \cmidrule{3-8} Scene & \#Vertices & \#Supervertices & OCE & \#Regions & OCE & \#Labels & \#Objects & (in minutes) \\ \midrule \emph{copyroom} & 1,309,421 & 1,996 & 0.92 & 347 & 0.73 & 157 & 15 & 19 \\ \emph{lounge} & 1,597,553 & 2,554 & 0.97 & 506 & 0.93 & 53 & 12 & 16 \\ \emph{hotel} & 3,572,776 & 13,839 & 0.98 & 1433 & 0.88 & 96 & 21 & 27 \\ \emph{dorm} & 1,823,483 & 3,276 & 0.97 & 363 & 0.78 & 75 & 10 & 15 \\ \emph{kitchen} & 2,557,593 & 4,640 & 0.97 & 470 & 0.85 & 75 & 24 & 23 \\ \emph{office} & 2,349,679 & 4,026 & 0.97 & 422 & 0.84 & 69 & 19 & 24 \\ \midrule Our scenes & 1,450,748 & 2,498 & 0.93 & 481 & 0.77 & 179 & 19 & 30 \\ \bottomrule \end{tabular} \end{table*} We created a dataset consisting of over 100 scenes. The dataset includes six scenes from publicly available datasets: \textit{copyroom} and \textit{lounge} from the Stanford dataset \cite{Zhou_TOG_2013}, \textit{hotel} and \textit{dorm} from SUN3D \cite{Xiao_ICCV_2013}, and the \textit{kitchen} and \textit{office} sequences from the Microsoft dataset \cite{Shotton_CVPR_2013}. The Stanford and SUN3D datasets also provide registered RGB and depth image pairs as well as camera pose data. In addition to these existing scenes, we collected 100 scenes using an Asus Xtion and a Microsoft Kinect v2. Our scenes were captured on the campuses of the University of Massachusetts Boston and the Singapore University of Technology and Design, at various locations such as lecture rooms, theatres, the university hall, the library, computer labs, and dormitories. All the scenes were then fully segmented and annotated using our tool. The dataset also includes camera pose information. Fig.~\ref{fig:results1} and Fig.~\ref{fig:results2} show the six scenes collected from the public datasets and several of our collected scenes. \subsection{Evaluation of 3D Segmentation} \label{sec:3Dsegmentationevaluation} We evaluated the impact of the graph-based and MRF-based segmentation on our dataset. We considered the annotated results obtained using our tool as the ground truth.
To measure the segmentation performance, we extended the object-level consistency error (OCE), an image segmentation evaluation metric proposed in \cite{Polak_IVC_2009}, to 3D vertices. Essentially, the OCE reflects the coincidence of the pixels/vertices of segmented regions and ground-truth regions. As indicated in \cite{Polak_IVC_2009}, compared with other segmentation evaluation metrics (e.g. the global and local consistency error in \cite{Martin_ICCV_2001}), the OCE considers both over- and under-segmentation errors in a single measure. In addition, the OCE can quantify the accuracy of multi-object segmentation and thus fits our evaluation purpose well. Table~\ref{tab:3DSeg} summarizes the OCE of the graph-based and MRF-based segmentation. As shown in the table, the MRF-based segmentation significantly improves the accuracy over the graph-based segmentation. The reduction from 3D vertices to supervertices and then to regions is also substantial. However, the experimental results also show that the automatically generated segmentations still do not approach the quality produced by humans; thus, user interaction is necessary. There are two reasons for this. First, both the graph-based and MRF-based segmentation aim to segment a 3D scene into homogeneous regions/surfaces rather than semantic objects. Second, the semantic segmentation done by users is subjective. For example, one may consider a pot and the plant growing in it as two separate objects or as a single object. After user interaction, the number of final labels is typically less than a hundred, and the number of semantic objects is around 10 to 20 in most cases. Note that the numbers of final labels and semantic objects are not identical because there can be labels whose semantics are not well defined, e.g. miscellaneous items on a table or small labels that appear as noise in the 3D reconstruction. We also measured the time required for user interaction with our tool; this information is reported in the last column of Table~\ref{tab:3DSeg}. As shown in the table, with the assistance of the tool, complex 3D scenes (with millions of vertices) could be completely segmented and annotated in less than 30 minutes, as opposed to approximately a few hours when done manually. Note that the interaction time depends on the user's experience. Several results of our tool on the public datasets and on our collected dataset are shown in Fig.~\ref{fig:results1} and Fig.~\ref{fig:results2}. Through the experiments we found that, although our tool was able to handle most reconstructed scenes in reasonable processing time, it failed on a few locally rough areas, e.g. the outer boundaries of the 3D meshes and pothole-like areas caused by loop closure. Enhancing broken surfaces and completing missing object parts will be our future work. \subsection{Evaluation of Object Search} \label{sec:objectsearchevaluation} To evaluate the object search functionality, we collected a set of 45 objects from our dataset. Those objects were selected so that they are semantically meaningful, common in practice, and have discriminative shapes. For example, drawers of cabinets were not selected since they present as flat surfaces, which can easily be found in many structures, e.g. walls and pictures. For each scene and each object class (e.g. chair), each object of the class was used as the template while the remaining repetitive objects of the same class were considered as the ground truth.
The object search was then applied to find repetitive objects given the template. We used precision, recall, and $F$-measure to evaluate the performance of the object search. The intersection over union (IoU) metric proposed in \cite{Everingham_IJCV_2010} for object detection was used as the criterion to determine true detections and false alarms. However, instead of computing the IoU on the bounding boxes of objects as in \cite{Everingham_IJCV_2010}, we computed the IoU at the point level (i.e. on the 3D vertices of the mesh), because our aim is not only to localize repetitive objects but also to segment them. In particular, an object $\mathcal{O}$ (a set of vertices) formed by the object search procedure is considered a true detection if there exists an annotated object $\mathcal{R}$ in the ground truth such that \begin{equation} \frac{|\mathcal{O} \cap \mathcal{R}|}{|\mathcal{O} \cup \mathcal{R}|} > 0.5 \end{equation} where $|\cdot|$ denotes the number of vertices; the threshold 0.5 is commonly used in object detection evaluation (e.g. \cite{Everingham_IJCV_2010}). The evaluation was performed for every template, and the precision, recall, and $F$-measure ($=2\times\frac{Precision\times Recall}{Precision+Recall}$) were then averaged over all evaluations. Table~\ref{tab:objectsearch} shows the averaged precision, recall, and $F$-measure of the object search. As shown, the tool can localize and segment 70\% of the repetitive objects with 69\% precision and 65\% $F$-measure. We also tested the object search without the alignment error (i.e. $E$ in (\ref{eq:alignmenterror})). The results show that, compared with using the shape context dissimilarity score alone (i.e. $C$ in (\ref{eq:deformationcost})), adding the alignment error slightly reduces the detection rate (by about 2\%) but largely improves the precision (from 22\% to 69\%), leading to a significant increase of the $F$-measure (from 30\% to 65\%). Our experiments also show that the object search works efficiently with templates represented by about 200 points. For example, for the scene presented in Fig.~\ref{fig:shapematching}, the object search completed within 15 seconds with a 150-point template on a machine equipped with an Intel Core i7 2.10 GHz CPU and 32 GB of memory. In practice, the object search can be run in a background thread while users continue interacting with the tool. \begin{table}[!ht] \caption{\small Performance of the proposed object search.} \label{tab:objectsearch} \centering \small \begin{tabular}{lccc} \toprule & Precision & Recall & $F$-measure \\ \midrule Without alignment error & 0.22 & \textbf{0.72} & 0.30 \\ With alignment error & \textbf{0.69} & 0.70 & \textbf{0.65} \\ \bottomrule \end{tabular} \end{table}
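The evaluation protocol above can be sketched as follows (Python). Detections and ground-truth objects are represented as sets of mesh-vertex indices; each ground-truth object is matched at most once, a common convention that the text does not spell out, and all names are ours:

\begin{verbatim}
def point_iou(detected, annotated):
    """Point-level IoU between two objects given as sets of vertex indices."""
    union = len(detected | annotated)
    return len(detected & annotated) / union if union else 0.0

def evaluate_search(detections, ground_truth, iou_thresh=0.5):
    """Precision, recall and F-measure for one template: a detection is a true
    positive if it overlaps an unmatched ground-truth object with IoU > 0.5."""
    matched, tp = set(), 0
    for det in detections:
        best_j, best_iou = None, 0.0
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            iou = point_iou(det, gt)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_j is not None and best_iou > iou_thresh:
            matched.add(best_j)
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall > 0 else 0.0)
    return precision, recall, f_measure
\end{verbatim}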
\subsection{Evaluation of 2D Segmentation} \label{sec:2Dsegmentationevaluation} We also evaluated the performance of the 2D segmentation using the OCE metric. This experiment was conducted on the \textit{dorm} sequence from the SUN3D dataset \cite{Xiao_ICCV_2013}, which contains 58 images with manually created, publicly available ground-truth labels. Table~\ref{tab:2DSeg} reports the segmentation performance obtained by directly projecting 3D regions onto the 2D images and by applying our alignment algorithm; the impact of the local shape, continuity, and smoothness cues is also quantified. As shown in Table~\ref{tab:2DSeg}, the combination of local shape, continuity, and smoothness achieves the best performance. We also visually observed that the alignment algorithm makes the projected contours smoother and closer to the true edges, which makes it more convenient for users to edit the 2D segmentation results. The alignment algorithm is also efficient: on average, the alignment takes about 1 second for a $640 \times 480$-pixel frame. \begin{table}[!ht] \caption{\small Comparison of different segmentation methods.} \label{tab:2DSeg} \centering \small \begin{tabular}{lc} \toprule Segmentation method & OCE \\ \midrule Projection & 0.57 \\ Local shape & 0.60 \\ Local shape + Continuity & 0.55 \\ Local shape + Smoothness & 0.55 \\ Local shape + Continuity + Smoothness & \textbf{0.54} \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} \label{sec:conclusion} This paper proposed a robust tool for the segmentation and annotation of 3D scenes. The tool couples geometric information from 3D space with color information from multi-view 2D images in an interactive framework. To enhance usability, we developed assistive interactive operations that allow users to flexibly manipulate scenes and objects in both 3D and 2D space. The tool also provides automated functionalities such as scene and image segmentation and object search for semantic annotation. Along with the tool, we created a dataset of more than 100 scenes, all of which were annotated using our tool; the newly created dataset was also used to verify the tool. The overall performance of the tool depends on the quality of the 3D reconstruction. Improving the quality of 3D meshes by recovering broken surfaces and missing object parts will be our future work. \section*{Acknowledgment} Lap-Fai Yu is supported by the University of Massachusetts Boston StartUp Grant P20150000029280 and by the Joseph P. Healey Research Grant Program provided by the Office of the Vice Provost for Research and Strategic Initiatives \& Dean of Graduate Studies of the University of Massachusetts Boston. This research is supported by the National Science Foundation under award number 1565978. We also acknowledge NVIDIA Corporation for graphics card donation. Sai-Kit Yeung is supported by Singapore MOE Academic Research Fund MOE2013-T2-1-159 and SUTD-MIT International Design Center Grant IDG31300106. We acknowledge the support of the SUTD Digital Manufacturing and Design (DManD) Centre, which is supported by the National Research Foundation (NRF) of Singapore. This research is also supported by the National Research Foundation, Prime Minister's Office, Singapore under its IDM Futures Funding Initiative. Finally, we sincerely thank Fangyu Lin for assisting with data capture and Guoxuan Zhang for the early version of the tool. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
train/arxiv
BkiUbZXxaKgS2Kkr4T0C
5
1
"\\section{Introduction}\nInflation \\cite{PhysRevD.23.347,starinf,Linde:1981mu} is an exponentially(...TRUNCATED)
train/arxiv
BkiUdhY5qhLBCSMaNgIz
5
1
"\\section{Introduction}\n\nPresently it is well established by QCD lattice calculations \\cite{Aoki(...TRUNCATED)
train/arxiv
BkiUbkM241xg-HfksElA
5
1
"\\section{Introduction} \n\nSimulations of Cold Dark Matter (CDM) models predict far more dark matt(...TRUNCATED)
train/arxiv
BkiUdPQ4eIZjn9vgH1vw
5
1
"\\section{Introduction}\nScene text recognition is a fundamental and important task in computer vis(...TRUNCATED)
train/arxiv
BkiUd-05qsMAIpX-f9JH
5
1
"\\section{Introduction}\n\\label{sec_intr}\n\nFollowing Flory's ``ideality hypothesis\" \\cite{Flor(...TRUNCATED)
train/arxiv
BkiUeQTxK1Thg9qFea7m
5
1
"\\section{Introduction}\n\nAlthough we now have the ability to discover galaxies as distant as z $\(...TRUNCATED)
train/arxiv
BkiUeAs4eIXh79703Jhz
5
1
"\\section{Introduction}\n\\IEEEPARstart{D}{ynamic} choice models, wherein the subsequent choice of (...TRUNCATED)
train/arxiv
BkiUdFg5jDKDx7Q7NLMx
5
1
"\\section{Introduction}\n\\label{sec:introduction}\n \nA real option is the right, but not the obli(...TRUNCATED)
train/arxiv

Top 30B token SlimPajama Subset selected by the Professionalism rater

This repository contains the dataset described in the paper Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models.

Code: https://github.com/opendatalab/Meta-rater

Dataset Description

This dataset contains the top 30B tokens from the SlimPajama-627B corpus, selected using the Professionalism dimension of the PRRC (Professionalism, Readability, Reasoning, Cleanliness) framework. Each document in this subset is scored and filtered by a ModernBERT-based rater fine-tuned to assess the degree of professional knowledge and expertise required to comprehend the text.

  • Source: SlimPajama-627B Annotated Dataset
  • Selection: Top 30B tokens by PRRC-Professionalism score
  • Quality metric: Professionalism (0–5 scale, see below)
  • Annotation coverage: 100% of selected subset

Dataset Statistics

  • Total tokens: 30B (subset of SlimPajama-627B)
  • Selection method: Top-ranked by PRRC-Professionalism ModernBERT rater
  • Domains: Same as SlimPajama (CommonCrawl, C4, GitHub, Books, ArXiv, Wikipedia, StackExchange)
  • Annotation: Each document has a professionalism score (0–5)

Professionalism Quality Metric

Professionalism measures the degree of expertise and prerequisite knowledge required to comprehend the text. Higher scores indicate content that is more technical, specialized, or advanced, while lower scores reflect general or layperson-accessible material.

  • 0–1: Minimal technical knowledge required (e.g., children's books, basic web content)
  • 2–3: Some specialized knowledge or depth (e.g., popular science, detailed articles)
  • 4–5: High expertise required (e.g., academic papers, technical manuals)

Scores are assigned by a ModernBERT model fine-tuned on Llama-3.3-70B-Instruct annotations, as described in the Meta-rater paper.

Annotation Process

  • Initial annotation: Llama-3.3-70B-Instruct rated 500k+ SlimPajama samples for professionalism
  • Model training: ModernBERT fine-tuned on these annotations
  • Scoring: All SlimPajama documents scored by ModernBERT; top 30B tokens selected

Citation

If you use this dataset, please cite:

@article{zhuang2025meta,
  title={Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models},
  author={Zhuang, Xinlin and Peng, Jiahui and Ma, Ren and Wang, Yinfan and Bai, Tianyi and Wei, Xingjian and Qiu, Jiantao and Zhang, Chi and Qian, Ying and He, Conghui},
  journal={arXiv preprint arXiv:2504.14194},
  year={2025}
}

License

This dataset is released under the same license as the original SlimPajama dataset. See the original SlimPajama repository for details.

Contact


Made with ❤️ by the OpenDataLab team
