# A Particle Filtering Framework for Integrity Risk of GNSS-Camera Sensor
Fusion
Adyasha Mohanty, Shubh Gupta and Grace Xingxin Gao
Stanford University
## BIOGRAPHIES
Adyasha Mohanty is a graduate student in the Department of Aeronautics and
Astronautics at Stanford University. She graduated with a B.S. in Aerospace
Engineering from Georgia Institute of Technology in 2019.
Shubh Gupta is a graduate student in the Department of Electrical Engineering
at Stanford University. He received his B.Tech degree in Electrical
Engineering with a minor in Computer Science from the Indian Institute of
Technology Kanpur in 2018.
Grace Xingxin Gao is an assistant professor in the Department of Aeronautics
and Astronautics at Stanford University. Before joining Stanford University,
she was a faculty member at the University of Illinois at Urbana-Champaign. She obtained
her Ph.D. degree at Stanford University. Her research is on robust and secure
positioning, navigation and timing with applications to manned and unmanned
aerial vehicles, robotics, and power systems.
## ABSTRACT
Unlike traditional approaches, adopting a joint approach to state estimation and integrity monitoring yields integrity monitoring that is not biased by the state estimate. So far, such a joint approach has been used only in Particle RAIM [1], and only for GNSS measurements. In our work, we extend Particle RAIM to a GNSS-camera fused system for
joint state estimation and integrity monitoring. To account for vision faults,
we derive a probability distribution over position from camera images using
map-matching. We formulate a Kullback-Leibler Divergence [2] metric to assess
the consistency of GNSS and camera measurements and mitigate faults during
sensor fusion. The derived integrity risk upper bounds the probability of
Hazardously Misleading Information (HMI). Experimental validation on a real-world urban dataset shows that our algorithm produces less than 11 m of position error and that the derived integrity risk overbounds the probability of HMI with a failure rate of 0.11 for an 8 m Alert Limit.
## 1 INTRODUCTION
In urban environments, GNSS signals suffer from a lack of continuous satellite signal availability, non-line-of-sight (NLOS) errors, and multipath effects.
Thus, it is important to quantify the integrity or measure of trust in the
correctness of the positioning solution provided by the navigation system.
Traditional integrity monitoring approaches [3] provide point positioning estimates, i.e., the state estimation algorithm is assumed to be correct and then the integrity of the estimated position is assessed. However, addressing state estimation and integrity monitoring separately does not capture the uncertainty in the state estimation algorithm. As a result, the integrity monitoring becomes biased by the acquired state estimate, which in turn leads to faulty state estimation.
Recently, an approach towards joint state estimation and integrity monitoring
for GNSS measurements was proposed in Particle RAIM [1]. Instead of producing
point positioning estimates, Particle RAIM uses a particle filter to form a
multi-modal probability distribution over position, represented as particles.
Traditional RAIM [4] is used to assess the correctness of different ranging
measurements and the particle weights are updated to form the distribution
over the position. From the resulting probability distribution, the integrity
risk is derived using an approximate upper bound to the probability of HMI or
the reference risk. By incorporating the correctness of different measurement
subsets directly into the state estimation, Particle RAIM is able to exclude
multiple faults in GNSS ranging measurements. However, due to large errors
from GNSS measurements, Particle RAIM requires employing conservative measures
such as large Alert Limits to adequately bound the reference risk.
For urban applications, improved positioning accuracy from Particle RAIM is
necessary to provide adequate integrity for smaller Alert Limits. Since
measurements from GNSS are not sufficient to provide the desired accuracy, it
is helpful to augment GNSS with additional sensors that increase redundancy in
measurements. Sensors such as cameras are effective complementary sensors to GNSS. In urban regions, cameras have access to rich environmental features [5] [6] [7] and provide more reliable sensing than GNSS, which suffers from multipath and NLOS errors [3] [8] [9] [10].
Thus, with added vision, we need a framework to provide integrity for the
fused GNSS-camera navigation system to account for two categories of faults.
The first category includes data association errors across images, where
repetitive features are found in multiple images creating ambiguity during
feature and image association. This ambiguity is further amplified due to
variations in lighting and environmental conditions. The second category
comprises errors that arise during sensor fusion of GNSS and camera
measurements. Ensuring that faults in either measurement do not dominate the sensor fusion process is paramount for exploiting the complementary characteristics of GNSS and camera.
Many works provide integrity for GNSS-camera fused systems utilizing a Kalman
Filter [11] framework or an information filter [12]. Vision Aided-RAIM [13]
introduced landmarks as pseudo-satellites and integrated them into a linear
measurement model alongside GPS observations. In [14], the authors implemented
a sequential integrity monitoring approach to isolate single satellite faults.
The integrity monitor uses the innovation sequence output from a single Kalman
filter to derive a recursive expression of the worst case failure mode slopes
and to compute the protection levels (PL) in real-time. An Information Filter
(IF) is used in [15] for data fusion wherein faults are detected based on the
Kullback-Leibler divergence (KL divergence) [2] between the predicted and the
updated distributions. After all detected faulty measurements are removed, the
errors are modeled by a student’s t distribution to compute a PL. A student’s
t distribution is also used in [16] alongside informational sensor fusion for
fault detection and exclusion. The degree of the distribution is adapted in
real-time based on the computed residual from the information filter. A
distributed information filter is proposed in [17] to detect faults in GPS
measurement by checking the consistency through log-likelihood ratio of the
information innovation of each satellite. These approaches model measurement fault distributions with a Gaussian distribution, although for camera measurements the true distribution might be non-linear, multi-modal, and arbitrary in nature. Using a simplified linear measurement probability distribution renders these frameworks infeasible and unreliable for safety-critical vision-augmented GNSS applications.
Another line of work builds on Simultaneous Localization and Mapping (SLAM)
based factor graph optimization techniques. Bhamidipati et al. [5] derived PL
by modeling GPS satellites as global landmarks and introducing image pixels
from a fish-eye camera as additional landmarks. The raw image is categorized
into sky and non-sky pixels to further distinguish between LOS and NLOS
satellites. The overall state is estimated using graph optimization along with
an M-estimator. Although this framework is able to exclude multiple faults in
GPS measurements, it is not extendable to measurements from forward or rear
facing cameras that do not capture sky regions. Along similar lines,
measurements from a stereo camera along with GNSS pseudoranges are jointly
optimized in a graph optimization framework in [18]. GNSS satellites are
considered as feature vision points and pose-graph SLAM is applied to achieve
a positioning solution. However, graph optimization approaches share the same limitation as Kalman Filter based approaches: they produce point positioning estimates and do not account for the uncertainty in state estimation, which biases integrity monitoring.
Overall, existing integrity monitoring algorithms for GNSS-camera fusion have
the following limitations:
  1. They address state estimation and integrity monitoring separately, similar to traditional RAIM approaches.
  2. They accommodate camera measurements within a linear or linearizable framework such as a KF, EKF, or IF and become infeasible when camera measurements are not linearizable without loss of generality.
  3. There is no standard way in the literature to quantify the uncertainty in camera measurements directly from raw images.
  4. They use outlier rejection techniques to perform fault detection and exclusion after obtaining the positioning solution. There is no framework that accounts for faults both independently in GNSS and camera measurements as well as faults that arise during sensor fusion.
In our work, we overcome the above limitations by proposing the following
contributions. This paper is based on our recent ION GNSS+ 2020 conference
paper [19].
  1. We jointly address state estimation and integrity monitoring for GNSS-camera fusion with a particle filtering framework. We retain the advantages of Particle RAIM while extending it to include camera measurements.
  2. We derive a probability distribution over position directly from images leveraging image registration.
  3. We develop a metric based on KL divergence [2] to fuse probability distributions obtained from GNSS and camera measurements. By minimizing the KL divergence of the distribution from each camera measurement with respect to the GNSS measurement distribution, we ensure that erroneous camera measurements do not affect the overall probability distribution. Stated otherwise, the divergence metric augments the shared belief over the position from both sensor measurements by minimizing cross-contamination during sensor fusion.
  4. We experimentally validate our framework on an urban environment dataset [20] with faults in GNSS and camera measurements.
The rest of the paper is organized as follows. In Section 2, we describe the
overall particle filter framework for probabilistic sensor fusion. In Sections
3 and 4, we infer a distribution over position from GNSS and camera
measurements, respectively. Section 5 elaborates on the probabilistic sensor
fusion of GNSS and camera measurements along with the proposed KL divergence
metric. In Section 6, we describe the integrity risk bounding. Sections 7 and
8 present the experimental setup and the results from experimental validation on
the urban environment dataset, respectively. In Section 9, we conclude our
work.
## 2 PARTICLE FILTER FRAMEWORK FOR PROBABILISTIC SENSOR FUSION
The distribution over the position inferred from GNSS and camera measurements
is multi-modal due to faults in a subset of measurements. Thus, to model such
distributions, we choose a particle filtering approach that further allows us
to keep track of multiple position hypotheses rather than a single position
estimate. Although a particle filtering approach was used in Particle RAIM
[1], the authors only considered GNSS ranging measurements. In our work, we
extend the framework to include measurements from a camera sensor. Figure 1
represents our overall framework. We add the camera and probabilistic sensor
fusion modules to the framework proposed in [1].
Figure 1: Particle filter framework with probabilistic sensor fusion of GNSS
and camera measurements and integrity risk bounding. The highlighted modules
represent our contributions. The GNSS and Risk Bounding Modules are adopted
from Particle RAIM [1].
Our framework consists of the following modules:
* •
Perturbation and propagation: Using noisy inertial odometry from the IMU, we
generate a set of motion samples, each of which perturbs the previous particle
distribution in the propagation step.
* •
GNSS module: This module from Particle RAIM [1] takes GNSS ranging
measurements from multiple satellites, some of which may be faulty, and outputs
a probability distribution over position using a fault-tolerant weighting
scheme described in Section 3. The particles from the GNSS module are
propagated to the camera module to ensure that the distributions from GNSS and
camera share the same domain of candidate positions.
* •
Camera module and synchronization with motion data: The camera module takes a
camera image and matches it to the images in a map database using image
registration to generate similarity scores. The underlying state of the best
matched image is extracted and propagated forward to the current GNSS time
stamp by interpolating with IMU odometry. This step ensures that the
probability distributions from camera and GNSS measurements are generated at
the same time stamps. Finally, we use a categorical distribution function to
transform the similarity scores into a probability distribution over position
hypotheses as described in Section 4.
* •
Probabilistic sensor fusion: This module outputs a joint likelihood over
positions from GNSS and camera measurements after fusing them with the
proposed KL divergence metric in Section 5.1. Particles are resampled from the
current distribution with Sequential Importance Resampling [21].
* •
Risk bounding: We adopt the risk bounding formulation proposed in [1] to
compute the integrity risk from the derived probability distribution over the
position domain. Using generalization bounds from statistical learning theory
[22], the derived risk bound is formally shown to over bound the reference
risk in Section 6.
We elaborate on the various modules of our framework in the following
sections.
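As a reading aid, the sketch below (ours, not part of the original implementation) illustrates how these modules interact within a single filter epoch; the callable `joint_log_likelihood` stands in for the fused GNSS-camera likelihood constructed in Sections 3-5, and the propagation variance follows Table 1.

```python
import numpy as np

def particle_filter_epoch(particles, odometry, joint_log_likelihood, rng):
    """One filter epoch: propagate with noisy odometry, weight by the fused
    GNSS-camera likelihood, and resample (illustrative sketch)."""
    # Perturbation and propagation around the IMU odometry
    # (propagation variance of 3 m^2 as in Table 1)
    particles = particles + odometry + rng.normal(scale=np.sqrt(3.0), size=particles.shape)

    # Joint log-likelihood over particles, built by the GNSS, camera and fusion modules
    log_w = joint_log_likelihood(particles)

    # Normalized importance weights (shifted for numerical stability)
    w = np.exp(log_w - log_w.max())
    w = w / w.sum()

    # Sequential Importance Resampling
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```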
## 3 GNSS MODULE - PARTICLE RAIM
A likelihood model for the GNSS measurements is derived using the mixture
weighting method proposed in Particle RAIM [1]. Instead of assuming
correctness of all GNSS measurements, the likelihood is modeled as a mixture
of Gaussians to account for faults in some measurements. Individual
measurement likelihoods are modeled as Gaussians with the expected pseudoranges as means and variances based on the Dilution of Precision (DOP). The Gaussian Mixture Model (GMM) [23] [24] is expressed as:
$L_{t}(m^{t})=\sum_{k=0}^{R}\gamma_{k}\mathcal{N}(m_{k}^{t}|\mu_{X}^{t,k},\sigma_{X}^{t,k});\sum_{k=0}^{R}\gamma_{k}=1,$
(1)
where $L_{t}(m^{t})$ denotes the likelihood of measurement $m$ at time $t$.
$\gamma$ denotes the measurement responsibility or the weights of the
individual measurement components and $R$ refers to the total number of GNSS
ranging measurements. $\mu$ and $\sigma$ represent the mean and the standard
deviation of each Gaussian component inferred from DOP. $X$ refers to the
collection of position hypotheses denoted by particles and $k$ is the index of
the number of Gaussians in the mixture. The weights are inferred with a single
step of the Expectation-Maximization (EM) scheme [25] as shown in Figure 2.
Figure 2: Two steps of the EM scheme used to derive the weight of each
Gaussian likelihood in the GMM. In the expectation step, the local vote for
each particle is computed based on the squared-normal voting on the normalized
residual for a particle obtained with traditional RAIM. The overall confidence
is inferred by normalizing the votes and pooling them using Bayesian maximum a
posteriori (MAP) estimation.
To avoid numerical errors due to finite precision, the likelihood model is implemented in the log domain by extending the input space to include additional copies of the state-space variable, one for each GNSS measurement [26]. The new likelihood is written as:
$P\left(m^{t}\middle|X^{t},\chi=k\right)=\gamma_{k}\mathcal{N}\left(m_{k}^{t}\middle|\mu_{x}^{t,k},\sigma_{x}^{t,k}\right)\
;\sum_{k=1}^{R}\gamma_{k}=1,$ (2)
where $\chi$ is an index that denotes the GNSS measurement associated with the particle replica.
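For illustration, the mixture likelihood of Equation (1) can be evaluated per particle in the log domain, which also addresses the finite-precision issue noted above. The sketch below is ours: it assumes clock-corrected pseudoranges and already-computed EM responsibilities, and all function and variable names are illustrative.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def gnss_log_likelihood(particles, sat_pos, pseudoranges, gamma, sigma):
    """Per-particle log of the Gaussian mixture likelihood in Equation (1).

    particles:    (P, 3) candidate positions (the particles X)
    sat_pos:      (R, 3) satellite positions
    pseudoranges: (R,)   clock-corrected pseudorange measurements m^t
    gamma:        (R,)   EM-derived responsibilities, summing to one
    sigma:        scalar (or (R,)) standard deviation inferred from DOP
    """
    # Expected pseudorange from every particle to every satellite
    expected = np.linalg.norm(particles[:, None, :] - sat_pos[None, :, :], axis=-1)  # (P, R)
    # Gaussian log-density of each measurement under each particle hypothesis
    log_comp = norm.logpdf(pseudoranges[None, :], loc=expected, scale=sigma)          # (P, R)
    # Mixture combined in the log domain; log-sum-exp avoids finite-precision underflow
    return logsumexp(log_comp + np.log(gamma)[None, :], axis=1)                       # (P,)
```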
## 4 CAMERA MODULE
To quantify the uncertainty from camera images, we use a map-matching
algorithm that matches a camera image directly to an image present in a map
database. Our method is implemented in OpenCV [27] and comprises three steps
shown in Figure 3.
Figure 3: Generating probability distribution over position from camera
images.
Each block is elaborated below.
* •
Database Creation: We assume prior knowledge of the geographical region where
we are navigating. Based on GPS coordinates, we select images from the known
area using Google Street View Imagery. These images along with their
associated coordinates form the database. Features are extracted from these
images and stored in a key point-descriptor format.
* •
Image Registration: After receiving a camera test image, we extract features
and descriptors with the ORB [28] algorithm. Although we experimented with
other feature extraction methods such as SIFT [29], SURF [30], and AKAZE [31],
ORB was found most effective for extracting descriptors from highly blurred
images. The descriptor vectors are clustered with a k-means algorithm [32] to
form a vocabulary tree [33]. Each node in the tree corresponds to an inverted
file, i.e., a file containing the ID-numbers of images in which a particular
node is found and the relevance of each feature to that image. The database is
then scored hierarchically based on Term Frequency Inverse Document Frequency
(TF-IDF) scoring [33], which quantifies the relevance of the images in the
database to the camera image. We refer to these scores as the similarity
scores. The image with the highest score is chosen as the best match and the
underlying state is extracted.
* •
Probability generation after synchronization: After extracting the state from
the best camera image in the database, we propagate the state to the same time
stamp as the GNSS measurement. The raw vehicle odometry is first synchronized
with GNSS measurements using the algorithm in [20]. Using the time difference
between the previous and current GNSS measurements, we linearly interpolate
the extracted state with IMU motion data as shown below.
$x^{t}=x^{t-1}+v^{t-1}dt+0.5a^{t-1}\ dt^{2}$ (3)
where $x^{t}$ refers to the 3D position at epoch $t$, $dt$ refers to the time difference between successive camera measurements, and $v^{t-1}$ and $a^{t-1}$ are the interpolated IMU velocity and acceleration at epoch $t-1$.
Next, we compute the Euclidean distance between the interpolated state and the
current particle distribution from GNSS measurements to obtain new similarity
scores. This step ensures that the probability distributions computed from
camera and GNSS measurements share the same domain of candidate positions. A
SoftMax function takes the scores and outputs a probability distribution over
position. Normalization of the scores enforces a unit integral for the
distribution.
$Q(n^{t}|X^{t})=\frac{\exp(\omega^{t})}{\sum_{c}\exp(\omega_{c}^{t})}$ (4)
where $Q$ is the probability distribution associated with camera measurement
$n$ at time $t$ over the position domain $X$, $\omega_{c}^{t}$ represents the computed distance score for particle $c$, and $c$ is the index for individual particles.
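A minimal sketch of this last step follows. The sign convention (negative distance as the SoftMax score, so that particles closer to the map-matched state receive higher probability) and the function name are our assumptions.

```python
import numpy as np

def camera_particle_distribution(particles, interpolated_state):
    """SoftMax distribution over the particles from one camera match, cf. Equation (4).

    The score omega_c is taken here as the negative Euclidean distance between each
    particle and the map-matched, IMU-interpolated state (our sign convention).
    """
    omega = -np.linalg.norm(particles - interpolated_state[None, :], axis=1)
    omega = omega - omega.max()          # shift for numerical stability
    q = np.exp(omega)
    return q / q.sum()                   # unit integral over the particle set
```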
## 5 PROBABILISTIC SENSOR FUSION
After obtaining the probability distributions from GNSS and camera, we need to
form a joint distribution over the position. However, we need to ensure that
faults in camera measurements do not degrade the distribution from GNSS
measurements, which is coarse but correct, since that distribution accounts for faults in the ranging measurements through the RAIM voting scheme. Thus,
we need a metric to identify and exclude faulty camera measurements leveraging
knowledge of the distribution from GNSS. Additionally, the metric should
assess the consistency of the probability distribution from each camera
measurement with respect to the GNSS distribution and mitigate inconsistent
distributions that result from vision faults. The KL divergence [34]
represents one way to assess the consistency of two probability distributions.
By minimizing the divergence between the distributions inferred from camera
and GNSS, we ensure that both distributions are consistent.
### 5.1 KL Divergence: Metric Formulation
We provide a background on KL divergence prior to explaining our metric.
The KL divergence [34] between two discrete probability distributions, $p$ and
$q$, in the same domain is defined as:
$D_{KL}(p||q)=\sum\nolimits_{z\in\zeta}p_{z}\ log\ \frac{p_{z}}{q_{z}}$ (5)
where $\zeta$ represents the domain of both distributions and $z$ is each
element of the domain. In our work, we ensure that distributions from GNSS and
camera share the same position domain by propagating the particles from the
GNSS distribution to the camera module prior to generating the distribution
from camera measurements. Two important properties of the KL divergence are:
* •
The KL divergence between two distributions is always non-negative and not
symmetrical [34]
$D_{KL}(p||q)\neq D_{KL}(q||p)$ (6)
where $D_{KL}(q||p)$ is the reverse KL divergence between the distributions
$p$ and $q$.
* •
$D_{KL}(p||q)$ is convex in the pair $(p,q)$ if both distributions represent probability mass functions (pmf) [34].
Leveraging the above properties, we formulate our metric below.
* •
Mixture of Experts (MoE): We form a mixture distribution to represent
probability distributions from successive camera measurements, where a non-
Gaussian probability distribution is derived from a single camera image. Each
measurement is assigned a weight to represent its contribution in the mixture.
Instead of setting arbitrary weights, we leverage the GNSS distribution to
infer weights that directly correspond to whether a camera measurement is
correct or faulty. Thus, highly faulty camera measurements are automatically
assigned low weights in the MoE. The mixture distribution is given as:
$Q^{*}(n^{t}|X^{t})=\sum\limits_{j=1}^{K}\alpha_{j}^{*}\
Q^{j}(n_{j}^{t}|X^{t});\sum\limits_{j=1}^{K}\alpha_{j}^{*}=1$ (7)
where $Q^{*}(n^{t}|X^{t})$ represents the mixture distribution formed using
$K$ camera images between two successive GNSS time epochs.
$Q^{j}(n_{j}^{t}|X^{t})$ is the likelihood of a single camera image
$n_{j}^{t}$ recorded at time $t$ with $\alpha_{j}^{*}$ as the normalized
weight. $X^{t}$ are the particles representing position hypotheses and $j$ is
the index for the camera images. The weights are normalized below to ensure
that the MoE forms a valid probability distribution:
$\alpha_{j}^{*}=\frac{\alpha_{j}}{\sum\limits_{r=1}^{K}\alpha_{r}}$ (8)
where $\alpha_{j}^{*}$ is the normalized weight, $\alpha_{j}$ is the weight
prior to normalization, $r$ is the index for the number of camera images
between two successive GNSS time epochs, and $K$ is the total number of camera
measurements.
* •
Setup KL divergence: We set up a divergence minimization metric between the
distributions from each camera measurement and all GNSS measurements.
${KL}_{j}\left(\alpha_{j}\,Q^{j}\left(n_{j}^{t}\middle|X^{t}\right)\,||\,P\left(m_{k}^{t}\middle|X^{t},\chi=k\right)\right)=\sum_{i=1}^{S}\left(\alpha_{j}\,Q^{j}(n_{j}^{t}|X^{t})\right)\log\left[\frac{\alpha_{j}\,Q^{j}(n_{j}^{t}|X^{t})}{P\left(m_{k}^{t}\middle|X^{t},\chi=k\right)}\right]$ (9)
where $||\ $ denotes the divergence between both probability distributions,
$S$ represents the total number of particles or position hypotheses across
both distributions, and $i$ is the index for the particles. $\ P\
\left(m_{k}^{t}\middle|X^{t},\chi=k\ \right)$ is the probability distribution
at epoch $t$ from GNSS measurements as defined in Equation (2), $\alpha_{j}\ $
is the unnormalized weight, and $j$ is the index for the camera measurement.
* •
Minimize divergence: Using the convexity of the KL divergence (Property 2), we
minimize each divergence metric with respect to the unknown weight assigned to
the likelihood of each camera measurement. We abbreviate $\
P\left(m_{i}^{t}\middle|X^{t},\chi=i\ \right)$ as $P(x_{i})$ and
$Q\left(n_{j}^{t}\middle|X^{t}\right)\ $ as $Q(x_{i})$ for brevity and expand
Equation (9). Since $\alpha_{j}$ is independent of the summation index, we
keep it outside the summation and simplify our expansion below.
${KL}_{j}(Q||P)=\alpha_{j}\sum_{i=1}^{S}Q\left(x_{i}\right)\log\alpha_{j}+\alpha_{j}\sum_{i=1}^{S}Q\left(x_{i}\right)\log Q\left(x_{i}\right)-\alpha_{j}\sum_{i=1}^{S}Q\left(x_{i}\right)\log P\left(x_{i}\right)$ (10)
Taking the first derivative with respect to $\alpha_{j}$, we obtain
$\frac{\partial}{\partial\alpha_{j}}{KL}_{j}(Q||P)=\log\alpha_{j}\sum_{i=1}^{S}Q\left(x_{i}\right)+\sum_{i=1}^{S}Q\left(x_{i}\right)+\sum_{i=1}^{S}Q\left(x_{i}\right)\log Q\left(x_{i}\right)-\sum_{i=1}^{S}Q\left(x_{i}\right)\log P\left(x_{i}\right)$ (11)
Equating the expression on the right to 0 and solving for $\alpha_{j}$ gives
us:
$\alpha_{j}=e^{k}\ ;\ k=\frac{\sum_{i=1}^{S}Q\left(x_{i}\right)\ log\
\frac{P\left(x_{i}\right)}{Q\left(x_{i}\right)}}{\sum_{i=1}^{S}Q\left(x_{i}\right)}-1$
(12)
We also perform a second derivative test to ensure that the inferred $\alpha_{j}$ minimizes the divergence measure. Since the exponential function is always positive, $\alpha_{j}$ is always positive as well, and therefore the second derivative below is positive.
$\frac{1}{\alpha_{j}}\sum_{i=1}^{S}Q(x_{i})>0$ (13)
* •
Joint probability distribution over position: After obtaining the weights, we
normalize them using Equation (8). We obtain the joint distribution assuming
that the mixture distribution from camera measurements and the GMM from GNSS
measurements are mutually independent. The joint distribution is given as:
$P^{\ast}\left(n^{t},\ m^{t}\middle|X^{t}\right)=P\left(m_{k}^{t}\middle|X^{t},\chi=k\right)\,Q^{\ast}\left(n^{t}\middle|X^{t}\right)$ (14)
where $\ P\ \left(m_{k}^{t}\middle|X^{t},\chi=k\ \right)$ is the probability
distribution from GNSS measurements in Equation (2). We take the log
likelihood of the joint distribution to avoid finite precision errors.
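The weight computation and fusion described above can be sketched as follows; the small constant added before taking logarithms is a numerical safeguard of ours, the GNSS module output is treated as a single distribution over the particles, and all names are illustrative.

```python
import numpy as np

def fuse_gnss_camera(p_gnss, q_cams, eps=1e-12):
    """Fuse the GNSS distribution with K camera distributions via Eqs. (7), (8), (12), (14).

    p_gnss: (S,) probability distribution over the S particles from the GNSS module
    q_cams: list of K arrays, each (S,), one distribution per camera image
    Returns the log of the (unnormalized) joint distribution over the particles.
    """
    alphas = []
    for q in q_cams:
        # Closed-form minimizer of the per-image KL divergence, Equation (12)
        k = np.sum(q * np.log((p_gnss + eps) / (q + eps))) / np.sum(q) - 1.0
        alphas.append(np.exp(k))
    alphas = np.asarray(alphas)
    alphas = alphas / alphas.sum()                    # weight normalization, Equation (8)

    # Mixture of Experts over the camera measurements, Equation (7)
    q_mix = sum(a * q for a, q in zip(alphas, q_cams))

    # Joint distribution assuming independence of the two sensors, Equation (14),
    # taken in the log domain to avoid finite-precision errors
    return np.log(p_gnss + eps) + np.log(q_mix + eps)
```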
## 6 INTEGRITY RISK BOUNDING
We upper bound the probability of HMI using the risk bounding framework
introduced in [1]. For a single epoch, the probability of HMI for a given
Alert Limit $r$ is defined as:
$R_{x*}(\pi)=\mathop{\mathbb{E}}_{x\sim\pi}[P(\|x-x^{*}\|\geq r)]$ (15)
where $R_{x*}(\pi)$ is the probability of HMI with reference position $x^{*}$
and mean distribution in position space induced by all posterior distributions
$\pi$. The distributions are created by generating samples around the measured
odometry and then perturbing the initial particle distribution. From the PAC-
Bayesian [35] formulation and as shown in [1], the reference risk
$\mathop{\mathbb{\textbf{R}}(\pi^{t})}$ upper bound is:
$\mathop{\mathbb{\textbf{R}}(\pi^{t})}\leq\mathop{\mathbb{\textbf{R}}_{M}(\pi^{t})}+\mathcal{D}_{Ber}^{-1}(\mathop{\mathbb{\textbf{R}}_{M}(\pi^{t})},\epsilon)$
(16)
The first and second terms refer to empirical and divergence risk,
respectively. We explain the computation of each term below.
The empirical risk $\mathop{\mathbb{\textbf{R}}_{M}(\pi^{t})}$ is computed
from a finite set of perturbed samples of size $M$.
$\mathop{\mathbb{\textbf{R}}_{M}(\pi^{t})}=\frac{1}{M}\sum_{i=1}^{M}\mathop{\mathbb{E}}_{x\sim\pi^{t}}[l(x,\pi_{u}^{t})],$
(17)
where $l(x,\pi_{u}^{t})$ is the classification loss with respect to a motion sample resulting in the posterior distribution being classified as hazardous, and $\pi^{t}$ refers to the mean posterior distribution at time $t$.
The divergence risk term
$\mathcal{D}_{Ber}^{-1}(\mathop{\mathbb{\textbf{R}}_{M}(\pi^{t})},\epsilon)$
accounts for uncertainty due to perturbations that are not sampled. First, we
compute the gap term $\epsilon$ using KL divergence [2] of the current
distribution from the prior and a confidence requirement in the bound
$\delta$.
$\epsilon=\frac{1}{M}(KL(\pi^{t}||\pi^{t-1})+log(\frac{M+1}{\delta}))$ (18)
where $\delta$ is the confidence parameter of the bound. The means of the prior and current
distributions are taken as $\pi^{t-1}$ and $\pi^{t}$. The prior and current
distributions are approximated as multivariate Gaussian distributions.
The Inverse Bernoulli Divergence [1] $\mathcal{D}_{Ber}^{-1}$ is defined as:
$\mathcal{D}_{Ber}^{-1}(q,\epsilon)=t\;\;s.t.\;\;\mathcal{D}_{Ber}(q||q+t)=\epsilon$
(19)
where $\mathcal{D}_{Ber}(q||q+t)$ is the Bernoulli KL divergence [2] between $q$ and $q+t$, and $q$ is given by the empirical risk term. Finally, the Inverse Bernoulli Divergence [1] is obtained approximately as:
$\mathcal{D}_{Ber}^{-1}(q,\epsilon)=\sqrt{\frac{2\epsilon}{\frac{1}{q}+\frac{1}{1-q}}}$ (20)
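A compact sketch of the bound computation is given below. The hazard indicator used for the classification loss (particle mass outside the Alert Limit around the estimate) is our reading of Equation (17), and the clipping of the empirical risk is a numerical safeguard.

```python
import numpy as np

def integrity_risk_bound(perturbed_particles, perturbed_weights, x_hat, alert_limit,
                         kl_prior_current, delta=0.1):
    """Integrity risk bound of Equation (16) from M perturbed posterior distributions.

    perturbed_particles: list of M particle arrays, each (S, 3)
    perturbed_weights:   list of M weight vectors, each (S,), summing to one
    x_hat:               (3,) position estimate used to assess hazard
    kl_prior_current:    KL divergence between current and prior mean distributions (Eq. 18)
    delta:               confidence requirement in the bound
    """
    M = len(perturbed_particles)
    # Empirical risk, Equation (17): hazardous mass outside the Alert Limit
    # (our reading of the classification loss l(x, pi_u^t))
    losses = [np.sum(w * (np.linalg.norm(p - x_hat, axis=1) >= alert_limit))
              for p, w in zip(perturbed_particles, perturbed_weights)]
    r_emp = float(np.mean(losses))

    # Gap term, Equation (18)
    eps = (kl_prior_current + np.log((M + 1) / delta)) / M

    # Approximate inverse Bernoulli divergence, Equation (20); clipping avoids division by zero
    q = np.clip(r_emp, 1e-6, 1.0 - 1e-6)
    d_ber = np.sqrt(2.0 * eps / (1.0 / q + 1.0 / (1.0 - q)))

    return r_emp + d_ber                               # Equation (16)
```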
## 7 EXPERIMENTS
### 7.1 Datasets
We test our framework on a 2.3 km long urban driving dataset from Frankfurt
[20]. We use GNSS pseudorange measurements, images from a forward-facing
camera, ground truth from a NovAtel receiver, and odometry from the IMU. The
dataset contains NLOS errors in GNSS measurements and vision faults due to
variations in illumination. In addition to the real-world dataset, we create
emulated datasets by inducing faults in GNSS and vision measurements with
various controlled parameters.
### 7.2 Experimental Setup and Parameters
* •
Real-world dataset: We use GNSS ranging measurements with NLOS errors. For
simplicity, we estimate the shared clock bias by subtracting the average
residuals with respect to ground truth from all GNSS pseudoranges at one time
epoch.
* •
Emulated dataset: First, we vary the number of satellites with NLOS errors by
adding back the residuals to randomly selected satellites. This induces clock
errors in some measurements which are perceived as faults. Secondly, we remove
the NLOS errors from all measurements but add Gaussian bias noise to
pseudorange measurements from random satellites at random time instances. The
number of faults is varied between 2 and 9 out of 12 available measurements at any given time step. We induce faults in camera measurements by blurring random images with a 21x21 Gaussian kernel and adding occlusions of 25-50% of the image height and width.
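The fault injection just described can be sketched as follows; the exact way the pseudorange bias is applied and the placement of the occlusion are our assumptions.

```python
import numpy as np
import cv2

def inject_gnss_faults(pseudoranges, n_faults, bias_value, rng):
    """Add a bias (20-200 m in our experiments) to a random subset of pseudoranges.
    Whether the bias is constant or redrawn per measurement is our assumption."""
    faulty = rng.choice(len(pseudoranges), size=n_faults, replace=False)
    biased = np.array(pseudoranges, dtype=float)
    biased[faulty] += bias_value
    return biased, faulty

def inject_vision_faults(image, rng):
    """Blur with a 21x21 Gaussian kernel and black out a patch covering
    25-50% of the image height and width at a random location."""
    out = cv2.GaussianBlur(image, (21, 21), 0)
    h, w = out.shape[:2]
    oh = int(h * rng.uniform(0.25, 0.5))
    ow = int(w * rng.uniform(0.25, 0.5))
    y = rng.integers(0, h - oh)
    x = rng.integers(0, w - ow)
    out[y:y + oh, x:x + ow] = 0
    return out
```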
During the experimental simulation, a particle filter tracks the 3D position
(x,y,z) of the car and uses faulty GNSS and camera measurements along with
noisy odometry. Probability distributions are generated independently from
GNSS and camera and fused with the KL divergence metric to form the joint
distribution over positions. At each time epoch, the particle distribution
with the highest total log-likelihood is chosen as the estimated distribution
for that epoch. The integrity risk is computed from 10 perturbed posterior distributions of the initial particle distribution, and the reference risk is computed with ground truth. Our experimental parameters are listed in Table 1.
Table 1: Experimental Parameters for Validation with Real-world and Emulated Datasets

Parameter | Value | Parameter | Value
---|---|---|---
No. of GNSS measurements | 12 | Added Gaussian bias to GNSS measurements | 20-200 m
No. of faults in GNSS measurements | 2-9 | No. of particles | 120
Measurement noise variance | 10 m² | Filter propagation variance | 3 m²
Alert Limit | 8, 16 m | No. of odometry perturbations | 10
### 7.3 Baselines and Metrics
We use Particle RAIM as the baseline to evaluate our algorithm’s performance
for state estimation. The metric for state estimation is the root mean square
error (RMSE) of the estimated position with respect to ground truth for the
entire trajectory. The risk bounding performance is evaluated with metrics
derived from a failure event, i.e., when the derived risk bound fails to upper
bound the reference risk. The metrics are the following: failure ratio(the
fraction of cases where the derived risk bound fails to upper bound the
reference risk), failure error(the mean error during all failure events), and
the bound gap(average gap between the derived integrity risk) and the
reference risk.
For evaluating the integrity risk, we specify a performance requirement that
the position should lie within the Alert Limit with at least 90% probability.
A fault occurs if the positioning error exceeds the Alert Limit. The metrics
for integrity risk are reported based on when the system has insufficient
integrity or sufficient integrity [36], which respectively refer to the states
when a fault is declared or not. The false alarm rate equals the fraction of
the number of times the system declares insufficient integrity in the absence
of a fault. The missed identification rate is defined as the fraction of the
number of times the system declares sufficient integrity even though a fault
is present.
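For concreteness, the failure ratio and bound gap over a trajectory can be computed as sketched below; this is our reading of the definitions above.

```python
import numpy as np

def risk_bound_metrics(risk_bound, reference_risk):
    """Failure ratio and bound gap over a trajectory of epochs (our reading of Section 7.3).

    A failure event occurs at an epoch where the derived risk bound is below the
    reference risk; the bound gap is averaged over the non-failure epochs.
    """
    risk_bound = np.asarray(risk_bound, dtype=float)
    reference_risk = np.asarray(reference_risk, dtype=float)
    failures = risk_bound < reference_risk
    failure_ratio = failures.mean()
    ok = ~failures
    bound_gap = float(np.mean(risk_bound[ok] - reference_risk[ok])) if ok.any() else float("nan")
    return failure_ratio, bound_gap
```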
## 8 RESULTS
### 8.1 State Estimation
First, we test our algorithm with NLOS errors in GNSS ranging measurements and
added camera faults. Quantitative results in Table 2 demonstrate that our
algorithm produces 3D positioning estimates with overall RMSE of less than 11
m. Additionally, our algorithm reports lower errors compared to Particle RAIM
for all test cases. Our algorithm is able to compensate for the residual
errors from Particle RAIM by including camera measurements in the framework.
This leads to improved accuracy in the positioning solution.
Table 2: RMSE in 3D Position with NLOS errors and added vision faults

No. of faults out of 12 available GNSS measurements | Particle RAIM-Baseline (meter) | Our Algorithm (meter)
---|---|---
2 | 18.1 | 6.3
4 | 19.1 | 6.1
6 | 16.9 | 5.9
9 | 26.6 | 10.6
For qualitative comparison, we overlay the trajectories from our algorithm on
ground truth and highlight regions with positioning error greater than 10 m
in Figures 4 and 5. Trajectories from Particle RAIM show large deviations from
ground truth in certain regions, either due to poor satellite signal
availability or high NLOS errors in the faulty pseudorange measurements.
However, similar deviations are absent from the trajectories from our
algorithm which uses both GNSS and camera measurements. Our KL divergence
metric is able to mitigate the errors from vision and the errors from cross-
contamination during sensor fusion, allowing us to produce lower positioning
error.
(a) Particle RAIM (Baseline)
(b) Our Algorithm
Figure 4: State estimation under NLOS errors for 6 faulty GNSS pseudo range
measurements and added vision faults. Regions with positioning error greater
than 10 m are highlighted in red.
(a) Particle RAIM (Baseline)
(b) Our Algorithm
Figure 5: State estimation under NLOS errors for 9 faulty GNSS pseudo range
measurements and added vision faults. Regions with positioning error greater
than 10 m are highlighted in red.
Secondly, we test our algorithm with the emulated datasets. Quantitatively, we
plot the RMSE as a function of the added Gaussian bias value in Figure 6 and
as a function of the number of faulty GNSS ranging measurements in Figure 7.
For all validation cases, our algorithm produces an overall RMSE less than 10
m. Similar to the results from the real-world dataset, our algorithm reports
lower RMSE values than Particle RAIM. With a fixed number of faults, the
errors generally increase with increasing bias. At a fixed bias value, the
errors decrease with an increasing number of faults up to 6 faulty GNSS measurements, since a large number of faults is easily excluded by Particle RAIM, producing an improved distribution over the position. The improved
distribution from GNSS further enables the KL divergence metric to exclude
faulty camera measurements and produce a tighter distribution over the
position domain. However, with a higher number of faults, Particle RAIM does
not have enough redundant correct GNSS measurements to exclude the faulty
measurements resulting in higher positioning error. Nevertheless, with added
vision, our algorithm produces better positioning estimates for all test cases
than Particle RAIM.
Figure 6: RMSE from our algorithm and Particle RAIM (baseline) for varying
numbers of faults in GNSS ranging measurements at a fixed added Gaussian bias
value.
Figure 7: RMSE from our algorithm and Particle RAIM (baseline) for various
added Gaussian bias values with fixed number of faulty GNSS measurements.
### 8.2 Integrity Monitoring
We evaluate the integrity risk bounding performance for two Alert Limits, 8 m
and 16 m. For an Alert Limit of 8 m, Table 3 shows that the derived integrity
risk satisfies the performance requirement with very low false alarm and
missed identification rates. While the false alarm rates reported are 0 for
all test cases except two and the missed identification rates are always less
than 0.11. Additionally, the integrity risk bound upper bounds the reference
risk with a failure ratio of less than 0.11 and a bound gap of less than 0.4
for all cases. Figures 8 and 9 further support the observation that the
derived risk bound is able to over bound the reference risk with low failure
rate for the same Alert Limit. The few instances when the derived risk bound
fails to upper bound the reference risk occur due to large sudden jumps in the
reference risk that go undetected considering the fixed size of our motion
samples. However, in general, the integrity risk produced from our algorithm
is able to satisfy the desired performance requirement and successfully
overbound the reference risk for an Alert Limit as small as 8 m. This choice
of Alert Limit is allowed because of the low positioning errors that further
enable non-conservative integrity risk bounds.
Table 3: Integrity Risk for Alert Limit of 8 m

Added Bias Value (meter) | No. of Faults | $P_{FA}$ | $P_{MI}$ | Failure Ratio | Failure Error (meter) | Bound Gap
---|---|---|---|---|---|---
100 | 2 | 0 | 0.03 | 0.07 | 7.5 | 0.26
100 | 4 | 0 | 0.04 | 0.04 | 2.3 | 0.25
100 | 6 | 0 | 0.07 | 0.11 | 2.9 | 0.25
100 | 9 | 0.07 | 0.03 | 0.07 | 4.7 | 0.36
200 | 2 | 0 | 0.07 | 0.07 | 3.5 | 0.20
200 | 4 | 0.11 | 0 | 0.04 | 4.8 | 0.40
200 | 6 | 0 | 0 | 0 | - | 0.38
200 | 9 | 0 | 0.07 | 0.04 | 5.4 | 0.36
Figure 8: Reference risk and integrity risk bound with 8 m Alert Limit for
varying numbers of faults and added bias of 100 m in GNSS measurements. The
derived risk bound over bounds the reference risk with less than 0.11 failure
ratio for all test cases.
Figure 9: Reference risk and integrity risk bound with 8 m Alert Limit for
varying numbers of faults and added bias of 200 m in GNSS measurements. The
derived risk bound over bounds the reference risk with less than 0.07 failure
ratio for all test cases.
For an Alert Limit of 16 m, Table 4 shows that the integrity risk satisfies
the integrity performance requirement with 0 false alarm rates. Furthermore,
the missed identification rates are always 0 except for the test case with 9
faults and 100 m added bias. Specifying a larger Alert Limit lowers the risk
associated with the distribution over position since almost all particles from
the perturbed distributions lie within the Alert Limit. Thus, the integrity
risk with a 16 m Alert Limit is much smaller than the risk obtained with an 8 m Alert Limit, as shown in Figures 8 and 9. Additionally, the derived risk bound produces an even lower failure ratio of less than 0.07 and a tighter bound gap of less than 0.1. Overall, the derived risk
bound over bounds the reference risk for various bias and fault scenarios in
Figures 10 and 11.
Table 4: Integrity Risk for Alert Limit of 16 m

Added Bias Value (meter) | No. of Faults | $P_{FA}$ | $P_{MI}$ | Failure Ratio | Failure Error (meter) | Bound Gap
---|---|---|---|---|---|---
100 | 2 | 0 | 0 | 0 | - | 0.10
100 | 4 | 0 | 0 | 0 | - | 0.08
100 | 6 | 0 | 0 | 0.04 | 5.9 | 0.05
100 | 9 | 0 | 0.04 | 0.07 | 9.7 | 0.08
200 | 2 | 0 | 0 | 0.07 | 5.0 | 0.09
200 | 4 | 0 | 0 | 0.07 | 4.2 | 0.07
200 | 6 | 0 | 0 | 0 | 3.6 | 0.06
200 | 9 | 0 | 0 | 0.04 | 3.8 | 0.01
Figure 10: Reference risk and integrity risk bound with 16 m Alert Limit for
varying numbers of faults and added bias of 100 m in GNSS measurements. The
derived risk bound over bounds the reference risk with less than 0.07 failure
ratio for all test cases.
Figure 11: Reference risk and integrity risk bound with 16 m Alert Limit for
varying numbers of faults and added bias of 200 m GNSS measurements. The
derived risk bound over bounds the reference risk with less than 0.07 failure
ratio for all test cases.
## 9 CONCLUSION
In this paper, we presented a framework for joint state estimation and
integrity monitoring for a GNSS-camera fused system using a particle filtering
approach. To quantify the uncertainty in camera measurements, we derived a
probability distribution directly from camera images leveraging a data-driven
approach along with image registration. Furthermore, we designed a metric
based on KL divergence to probabilistically fuse measurements from GNSS and
camera in a fault-tolerant manner. The metric accounts for vision faults and
mitigates the errors that arise due to cross-contamination of measurements
during sensor fusion. We experimentally validated our framework on real-world
data under NLOS errors, added Gaussian bias noise to GNSS measurements, and
added vision faults. Our algorithm reported lower positioning error compared
to Particle RAIM which uses only GNSS measurements. The integrity risk from
our algorithm satisfied the integrity performance requirement for Alert Limits
of 8 m and 16 m with low false alarm and missed identification rates.
Additionally, the derived integrity risk successfully provided an upper bound
to the reference risk with a low failure rate for both Alert Limits, making
our algorithm suitable for practical applications in urban environments.
## 10 ACKNOWLEDGMENT
We express our gratitude to Akshay Shetty, Tara Mina and other members of the
Navigation of Autonomous Vehicles Lab for their feedback on early drafts of
the paper.
## References
* [1] S. Gupta and G. X. Gao, “Particle raim for integrity monitoring,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2019, 2019.
* [2] H. Zhu, “On information and sufficiency,” 04 1997.
* [3] N. Zhu, J. Marais, D. Bétaille, and M. Berbineau, “Gnss position integrity in urban environments: A review of literature,” IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 9, pp. 2762–2778, 2018.
* [4] Y. C. Lee, “Analysis of range and position comparison methods as a means to provide gps integrity in the user receiver,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, vol. 19, no. 9, pp. 1–4, 1986.
* [5] S. Bhamidipati and G. Gao, “Slam-based integrity monitoring using gps and fish-eye camera,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2019, pp. 4116–4129, 10 2019.
* [6] Z. Wang, Y. Wu, and Q. Niu, “Multi-sensor fusion in automated driving: A survey,” IEEE Access, vol. 8, pp. 2847–2868, 2020.
* [7] J. Rife, “Collaborative vision-integrated pseudorange error removal: Team-estimated differential gnss corrections with no stationary reference receiver,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 1, pp. 15–24, 2012.
* [8] He Chengyan, Guo Ji, Lu Xiaochun, and Lu Jun, “Multipath performance analysis of gnss navigation signals,” pp. 379–382, 2014.
* [9] S. M. Steven Miller, X. Zhang, and A. Spanias, Multipath Effects in GPS Receivers: A Primer. 2015.
* [10] K. Ali, X. Chen, F. Dovis, D. De Castro, and A. J. Fernández, “Gnss signal multipath error characterization in urban environments using lidar data aiding,” pp. 1–5, 2012.
* [11] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Transactions of the ASME–Journal of Basic Engineering, vol. 82, no. Series D, pp. 35–45, 1960.
* [12] X. Wang, N. Cui, and J. Guo, “Information filtering and its application to relative navigation,” Aircraft Engineering and Aerospace Technology, vol. 81, pp. 439–444, 09 2009.
* [13] L. Fu, J. Zhang, R. Li, X. Cao, and J. Wang, “Vision-aided raim: A new method for gps integrity monitoring in approach and landing phase,” Sensors (Basel, Switzerland), vol. 15, pp. 22854–73, 09 2015.
* [14] C. Tanil, S. Khanafseh, M. Joerger, and B. Pervan, “Sequential integrity monitoring for kalman filter innovations-based detectors,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2018, 10 2018.
* [15] J. Al Hage and M. E. El Najjar, “Improved outdoor localization based on weighted kullback-leibler divergence for measurements diagnosis,” IEEE Intelligent Transportation Systems Magazine, pp. 1–1, 2018.
* [16] J. Al Hage, P. Xu, and P. Bonnifait, “Bounding localization errors with student’s distributions for road vehicles,” International Technical Symposium on Navigation and timing, 11 2018.
* [17] N. A. Tmazirte, M. E. E. Najjar, C. Smaili, and D. Pomorski, “Multi-sensor data fusion based on information theory. application to gnss positionning and integrity monitoring,” 15th International Conference on Information Fusion, pp. 743–749, 2012.
* [18] Z. Gong, P. Liu, Q. Liu, R. Miao, and R. Ying, “Tightly coupled gnss with stereo camera navigation using graph optimization,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2018, pp. 3070–3077, 10 2018.
* [19] A. Mohanty, S. Gupta, and G. X. Gao, “A particle filtering framework for integrity risk of gnss-camera sensor fusion,” Proceedings of the 33rd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2020, 2020.
* [20] P. Reisdorf, T. Pfeifer, J. Breßler, S. Bauer, P. Weissig, S. Lange, G. Wanielik, and P. Protzel, “The problem of comparable gnss results – an approach for a uniform dataset with low-cost and reference data,” in The Fifth International Conference on Advances in Vehicular Systems, Technologies and Applications (M. Ullmann and K. El-Khatib, eds.), vol. 5, p. 8, nov 2016. ISSN: 2327-2058.
* [21] F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P. . Nordlund, “Particle filters for positioning, navigation, and tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 425–437, 2002.
* [22] S. B. O. Bousquet and G. Lugosi, “Introduction to statistical learning theory,” Advanced Lectures on Machine Learning, vol. 3176, pp. 169–207, 2004.
* [23] M. Simandl and J. Dunik, “Design of derivative-free smoothers and predictors,” 14th IFAC Symposium on System Identification, Newcastle, Australia, 03 2006.
* [24] H. W. Sorenson and D. L. Alspach, “Recursive bayesian estimation using gaussian sums,” Automatica, 1971.
* [25] J. P. Vila and P. Schniter, “Expectation-maximization gaussian-mixture approximate message passing,” IEEE Transactions on Signal Processing, vol. 61, no. 19, pp. 4658–4672, 2013.
* [26] C. M. Bishop, Pattern Recognition and Machine Learning. Information Science and Statistics, New York, NY: Springer, 2006.
* [27] G. Bradski, “The opencv library,” Dr. Dobb’s Journal of Software Tools, 2000.
* [28] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “Orb: An efficient alternative to sift or surf,” International Conference on Computer Vision, pp. 2564–2571, 2011.
* [29] D. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, pp. 91–110, 11 2004.
* [30] H. Bay, T. Tuytelaars, and L. Van Gool, “Surf: Speeded up robust features,” European Conference on Computer Vision, vol. 3951, pp. 404–417, 07 2006.
* [31] P. F. Alcantarilla, J. Nuevo, and A. Bartoli, “Fast explicit diffusion for accelerated features in nonlinear scale spaces,” British Machine Vision Conference, 2013.
* [32] S. P. Lloyd, “Least squares quantization in pcm,” Information Theory, IEEE Transactions, vol. 28.2, pp. 129–137, 1982.
* [33] D. Nister and H. Stewenius, “Scalable recognition with a vocabulary tree,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 2161–2168, 2006.
* [34] T. Erven and P. Harremoës, “Rényi divergence and kullback-leibler divergence,” Information Theory, IEEE Transactions on, vol. 60, pp. 3797–3820, 2014.
* [35] L. G. Valiant, “A theory of the learnable,” Commun. ACM, vol. 27, p. 1134–1142, Nov. 1984.
* [36] H. Pesonen, “A framework for bayesian receiver autonomous integrity monitoring in urban navigation,” NAVIGATION: Journal of the Institute of Navigation, vol. 58, pp. 229–240, 09 2011.
# Chance constrained sets approximation:
A probabilistic scaling approach - EXTENDED VERSION
M. Mammarella, V. Mirasierra, M. Lorenzen, T. Alamo, F. Dabbene
CNR-IEIIT, c/o Politecnico di Torino, C.so Duca degli Abruzzi 24, Torino, Italy. Universidad de Sevilla, Escuela Superior de Ingenieros, Camino de los Descubrimientos s/n, Sevilla, Spain. Systemwissenschaften, TTI GmbH, Nobelstr. 15, 70569 Stuttgart, Germany.
###### Abstract
In this paper, a sample-based procedure for obtaining simple and computable approximations of chance-constrained sets is proposed. The procedure allows one to control the complexity of the approximating set by defining families of simple approximating sets of given complexity. A probabilistic scaling procedure then allows rescaling these sets to obtain the desired probabilistic guarantees. The proposed approach is shown to be applicable to several problems in systems and control, such as the design of Stochastic Model Predictive Control schemes or the solution of probabilistic set membership estimation problems.
## 1 Introduction
In real-world applications, the complexity of the phenomena encountered and the random nature of data make dealing with uncertainty essential. In many cases, uncertainty arises in the modeling phase; in others it is intrinsic to both the system and the operating environment, as for instance wind speed and turbulence in aircraft or wind turbine control [1]. Hence, it is crucial to include the underlying stochastic characteristics of the framework and possibly accept a violation of constraints with a certain probability level, in order to improve the coherence between the model and reality. Deriving
results in the presence of uncertainty is of major relevance in different
areas, including, but not limited to, optimization [2] and robustness analysis
[3]. However, in contrast to robust approaches, where the goal is to determine a feasible solution which is optimal in some sense for all possible uncertainty instances, the goal in the stochastic framework is to find a solution that is feasible for almost all possible uncertainty realizations [4, 5]. In several applications, including engineering and finance, where
uncertainties in price, demand, supply, currency exchange rate, recycle and
feed rate, and demographic condition are common, it is acceptable, up to a
certain safe level, to relax the inherent conservativeness of robust
constraints enforcing probabilistic constraints. More recently, the method has
been used also in unmanned autonomous vehicle navigation [6, 7] as well as
optimal power flow [8, 9].
In the optimization framework, constraints involving stochastic parameters
that are required to be satisfied with a pre-specified probability threshold
are called chance constraints (CC). In general, dealing with CC implies facing
two serious challenges, that of stochasticity and of nonconvexity [10].
Consequently, while being attractive from a modeling viewpoint, problems
involving CC are often computationally intractable, generally shown to be NP-
hard, which seriously limits their applicability. However, being able to
efficiently solve CC problems remains an important challenge, especially in
systems and control, where CC often arise, as e.g. in stochastic model
predictive control (SMPC) [11, 12]. The scientific community has devoted significant research to devising computationally efficient approaches to deal with chance constraints. We review such techniques in Section 3, where we highlight three mainstream approaches: i) exact techniques; ii) robust approximations; and iii) sample-based approximations. In this paper, we present what we consider an
important step forward in the sample-based approach. We propose a simple and
efficient strategy to obtain a probabilistically guaranteed inner
approximation of a chance constrained set, with given confidence.
In particular, we describe a two-step procedure that involves: i) the preliminary approximation of the chance constrained set by means of a so-called Simple Approximating Set (SAS); ii) a sample-based scaling procedure that allows one to properly scale the SAS so as to guarantee the desired probabilistic
properties. The proper selection of a low-complexity SAS allows the designer
to easily tune the complexity of the approximating set, significantly reducing
the sample complexity. We propose several candidate SAS shapes, grouped in two
classes: i) sampled-polytopes; and ii) norm-based SAS.
The probabilistic scaling approach was presented in the conference papers [13,
14]. The present work extends these in several directions: first, we perform here a thorough mathematical analysis, providing proofs of all results. Second, the use of norm-based SAS is extended to encompass more general sets. More importantly, we consider here joint chance constraints.
This choice is motivated by the fact that enforcing joint chance constraints,
which have to be satisfied simultaneously, adheres better to some
applications, despite the inherent complexity. Finally, we present here a
second application, besides SMPC, related to probabilistic set-membership
identification.
The paper is structured as follows. Section 2 provides a general preamble of
the problem formulation and of chance constrained optimization, including two
motivating examples. An extensive overview of methods for approximating chance constrained sets is reported in Section 3, whereas the probabilistic scaling approach is detailed in Section 4. Sections 5 and 6 are dedicated
to the definition of selected candidate SAS, i.e. sampled-polytope and norm-
based SAS, respectively. Last, in Section 7, we validate the proposed approach
with a numerical example applying our method to a probabilistic set membership
estimation problem. Main conclusions and future research directions are
addressed in Section 8.
### 1.1 Notation
Given an integer $N$, $[N]$ denotes the integers from 1 to $N$. Given
$z\in\mathbb{R}^{s}$ and $p\in[1,\infty)$, we denote by $\|z\|_{p}$ the
$\ell_{p}$-norm of $z$, and by $\mathbb{B}^{s}_{p}\doteq\{\;z\in\mathbb{R}^{s}\;:\;\|z\|_{p}\leq 1\;\}$ the $\ell_{p}$-norm ball of radius one. Given integers $k,N$, and parameter $p\in(0,1)$, the Binomial cumulative distribution function is denoted as
$\mathbf{B}(k;N,p)\doteq\sum\limits_{i=0}^{k}\binom{N}{i}p^{i}(1-p)^{N-i}.$ (1)
The following notation is borrowed from the field of order statistics [15].
Given a set of $N$ scalars $\gamma_{i}\in\mathbb{R}$, $i\in[N]$, we denote by $\gamma_{1:N}$ the smallest one, by $\gamma_{2:N}$ the second smallest one, and
so on and so forth until $\gamma_{N:N}$, which is equal to the largest one. In
this way, given $r\geq 0$ we have that $\gamma_{r+1:N}$ satisfies that no more
than $r$ elements of $\\{\gamma_{1},\gamma_{2},\ldots,\gamma_{N}\\}$ are
strictly smaller than $\gamma_{r+1:N}$.
The Chebyshev center of a given set $\mathbb{X}$, denoted as
$\mathsf{Cheb}(\mathbb{X})$, is defined as the center of the largest ball
inscribed in $\mathbb{X}$, i.e.
$\mathsf{Cheb}(\mathbb{X})\doteq\arg\min_{\theta_{c}}\max_{\theta\in\mathbb{X}}\left\\{\|\theta-\theta_{c}\|^{2}\right\\}.$
Given an $\ell_{p}$-norm $\|\cdot\|_{p}$, its dual norm $\|\cdot\|_{p^{*}}$ is
defined as
$\|c\|_{p^{*}}\doteq\sup\limits_{z\in\mathbb{B}^{s}_{p}}c^{\top}z,\;\forall
c\in\mathbb{R}^{s}.$
In particular, the couples $(p,p^{*})$: $(2,2)$, $(1,\infty)$, $(\infty,1)$
give rise to dual norms.
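For concreteness, the Binomial distribution function of (1) and the order statistic $\gamma_{r+1:N}$ can be computed as in the following sketch (ours, for illustration only).

```python
import numpy as np
from scipy.stats import binom

def binomial_cdf(k, N, p):
    """B(k; N, p) of Equation (1): probability of at most k successes in N Bernoulli(p) trials."""
    return binom.cdf(k, N, p)

def order_statistic(gammas, r):
    """Return gamma_{r+1:N}: the (r+1)-th smallest of the N given scalars, so that no more
    than r elements of the sample are strictly smaller than the returned value."""
    return np.sort(np.asarray(gammas))[r]

# Example: gamma_{5:100} of 100 uniform samples, i.e. r = 4 samples may lie strictly below it
rng = np.random.default_rng(0)
print(binomial_cdf(4, 100, 0.05), order_statistic(rng.random(100), 4))
```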
## 2 Problem formulation
Consider a robustness problem, in which the controller parameters and
auxiliary variables are parametrized by means of a decision variable vector
$\theta$, which is usually referred to as design parameter and is restricted
to a set $\Theta\subseteq\mathbb{R}^{n_{\theta}}$. Furthermore, the
uncertainty vector $w\in\mathbb{R}^{n_{w}}$ represents one of the admissible
uncertainty realizations of a random vector with given probability
distribution $\mathsf{Pr}_{\mathbb{W}}$ and (possibly unbounded) support
$\mathbb{W}$.
This paper deals with the special case where the design specifications can be
encoded as a set of $n_{\ell}$ uncertain linear inequalities
$F(w)\theta\leq g(w),$ (2)
where
$F(w)=\begin{bmatrix}f_{1}^{\top}(w)\\ \vdots\\ f_{n_{\ell}}^{\top}(w)\end{bmatrix}\in\mathbb{R}^{n_{\ell}\times{n_{\theta}}},\quad g(w)=\begin{bmatrix}g_{1}(w)\\ \vdots\\ g_{n_{\ell}}(w)\end{bmatrix}\in\mathbb{R}^{n_{\ell}},$
are measurable functions of the uncertainty vector $w\in\mathbb{R}^{n_{w}}$.
The inequality in (2) is to be interpreted component-wise, i.e.
$f_{\ell}(w)\theta\leq g_{\ell}(w),\forall\ell\in[n_{\ell}].$
Due to the random nature of the uncertainty vector $w$, each realization of
$w$ corresponds to a different set of linear inequalities. Consequently, each
value of $w$ gives rise to a corresponding set
$\mathbb{X}(w)=\\{\;\theta\in\Theta\;:\;F(w)\theta\leq g(w)\;\\}.$ (3)
In practical applications, one usually accepts a risk of violating the constraints.
While this is often done by choosing the set $\mathbb{W}$ appropriately, a less
conservative solution can be obtained by letting $\mathbb{W}$
encompass all possible values and characterizing the region of the design
space $\Theta$ in which the fraction of elements of $\mathbb{W}$ that violate
the constraints is below a specified level. This concept is rigorously
formalized by means of the notion of _probability of violation_.
###### Definition 1 (Probability of violation).
Consider a probability measure ${\rm Pr}_{\mathbb{W}}$ over $\mathbb{W}$ and
let $\theta\in\Theta$ be given. The probability of violation of $\theta$
relative to inequality (2) is defined as
$\mathsf{Viol}(\theta)\doteq\mathsf{Pr}_{\mathbb{W}}\,\\{\,F(w)\theta\not\leq
g(w)\,\\}.$
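For readers who prefer a computational view, the violation probability in Definition 1 can be estimated empirically by sampling $w$. The following Python sketch assumes user-supplied callables `F`, `g` and a sampler for $w$; these names and the sample size are ours, purely for illustration.

```python
import numpy as np

def estimate_violation(theta, sample_w, F, g, n_samples=10_000, seed=0):
    """Monte Carlo estimate of Viol(theta) = Pr_W{ F(w) theta <= g(w) fails }."""
    rng = np.random.default_rng(seed)
    violations = 0
    for _ in range(n_samples):
        w = sample_w(rng)
        # the joint constraint is violated as soon as ANY of the n_ell rows fails
        if np.any(F(w) @ theta > g(w)):
            violations += 1
    return violations / n_samples
```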
Given a constraint on the probability of violation, i.e.
$\mathsf{Viol}(\theta)\leq\varepsilon$, we denote as (joint) _chance
constrained set_ of probability $\varepsilon$ (shortly, $\varepsilon$-CCS) the
region of the design space for which this probabilistic constraint is
satisfied. This is formally stated in the next definition.
###### Definition 2 ($\varepsilon$-CCS).
Given $\varepsilon\in(0,1)$, we define the chance constrained set of
probability $\varepsilon$ as follows
$\mathbb{X}_{\varepsilon}=\\{\;\theta\in\Theta\;:\;\mathsf{Viol}(\theta)\leq\varepsilon\;\\}.$
(5)
Note that the $\varepsilon$-CCS represents the region of the design space
$\Theta$ for which this probabilistic constraint is satisfied and it is
equivalently defined as
$\mathbb{X}_{\varepsilon}\doteq\Bigl{\\{}\theta\in\Theta\;:\;\mathsf{Pr}_{\mathbb{W}}\left\\{F(w)\theta\leq
g(w)\right\\}\geq 1-\varepsilon\Bigr{\\}}.$ (6)
###### Remark 1 (Joint vs. individual CCs).
The constraint $\theta\in\mathbb{X}_{\varepsilon}$, with
$\mathbb{X}_{\varepsilon}$ defined in (6), describes a joint chance
constraint. That is, it requires that the joint probability of satisfying the
inequality constraint
$F(w)\theta\leq g(w)$
is guaranteed to be greater than the probabilistic level $1-\varepsilon$. We
remark that this constraint is notably harder to impose than individual CCs,
i.e. constraints of the form
$\displaystyle\theta\in\mathbb{X}_{\varepsilon_{\ell}}^{\ell}\,\,$
$\displaystyle\\!\\!\\!\\!\doteq\\!\\!\\!$
$\displaystyle\Bigl{\\{}\theta\in\Theta\,:\,\mathsf{Pr}_{\mathbb{W}}\left\\{f_{\ell}(w)^{\top}\theta\leq
g_{\ell}(w)\right\\}\geq 1-\varepsilon_{\ell}\Bigr{\\}},$
$\displaystyle\qquad\ell\in[n_{\ell}],$
with $\varepsilon_{\ell}\in(0,1)$. A discussion on the differences and
implications of joint and individual chance constraints may be found in
several papers, see for instance [10, 16] and references therein.
###### Example 1.
A simple illustrative example of the $\varepsilon$-CCS is shown in Figure
1. The dotted circle delimits the region of the design space that satisfies all the
constraints (the so-called robust region); the constraints are lines tangent to the dotted
circle at uniformly generated points. The outer red circle represents the
chance constrained set $\mathbb{X}_{\varepsilon}$ for the specific value
$\varepsilon=0.15$. That is, the red circle is obtained in such a way that
every point in it has a probability of violating a random constraint no larger
than $0.15$. Note that in this very simple case, the set
$\mathbb{X}_{\varepsilon}$ can be computed analytically, and turns out to be a
scaled version of the robust set. We observe that the $\varepsilon$-CCS is
significantly larger than the robust set.
Figure 1: Red circle = $\mathbb{X}_{\varepsilon}$, dotted circle = unit
circle, blue lines = constraint samples.
Hence, while there exist simple examples for which a closed-form computation
of $\mathbb{X}_{\varepsilon}$ is possible, such as the one revisited here and
first used in [13], we remark that this is not the case in general. Indeed, as
pointed out in [10], typically the computation of the $\varepsilon$-CCS is
extremely difficult, since the evaluation of the probability
$\mathsf{Viol}(\theta)$ amounts to the computation of a multivariate integral,
which is NP-Hard [17].
Moreover, the set $\varepsilon$-CCS is often nonconvex, except for very
special cases. For example, [1, 18] show that the solution set of separable
chance constraints can be written as the union of cones, which is nonconvex in
general.
###### Example 2 (Example of nonconvex $\varepsilon$-CCS).
To illustrate these inherent difficulties, we consider the following three-
dimensional example ($n_{\theta}=3$) with $w=\left\\{w_{1},w_{2}\right\\}$,
where the first uncertainty $w_{1}\in\mathbb{R}^{3}$ is a three-dimensional
normally distributed random vector with zero mean and covariance matrix
$\Sigma=\left[\begin{array}[]{ccc}4.5&2.26&1.4\\\ 2.26&3.58&1.94\\\
1.4&1.94&2.19\end{array}\right],$
and the second uncertainty $w_{2}\in\mathbb{R}^{3}$ is a three-dimensional
random vector whose elements are uniformly distributed in the interval
$[0,1]$. The set of viable design parameters is given by $n_{\ell}=4$
uncertain linear inequalities of the form
$F(w)\theta\leq\mathbf{1}_{4},\quad
F(w)=\left[\begin{array}[]{cccc}w_{1}&w_{2}&(2w_{1}-w_{2})&w_{1}^{2}\end{array}\right]^{\top}.$
(7)
The square power $w_{1}^{2}$ is to be interpreted element-wise.
In this case, to obtain a graphical representation of the set
$\mathbb{X}_{\varepsilon}$, we resorted to gridding the set $\Theta$ and, for
each point $\theta$ in the grid, approximating the probability of violation through a
Monte Carlo computation. This procedure is clearly unaffordable in higher-dimensional
settings. In Figure 2 we report the plot of the computed
$\varepsilon$-CCS set for different values of $\varepsilon$. We observe that
the set is indeed nonconvex.
Figure 2: The $\varepsilon$-CCS set for $\varepsilon=0.15$ (smaller set),
$\varepsilon=0.30$ (intermediate set), and $\varepsilon=0.45$ (larger set). We
observe that all sets are nonconvex, but the nonconvexity is more evident for
larger values of $\varepsilon$, corresponding to larger levels of accepted
violation, while the set $\mathbb{X}_{\varepsilon}$ appears “almost convex”
for small values of $\varepsilon$. This kind of behaviour is in accordance
with a recent result that proves convexity of the $\varepsilon$-CCS for values
of $\varepsilon$ going to zero, a property usually referred to as eventual
convexity [19].
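As an illustration of the gridding procedure described above, the following Python sketch combines a Monte Carlo estimate of the violation probability with a coarse grid over $\Theta$ for the data of Example 2; the grid range, resolution and sample sizes are our own illustrative choices, not values used in the paper.

```python
import numpy as np

Sigma = np.array([[4.5, 2.26, 1.4],
                  [2.26, 3.58, 1.94],
                  [1.4, 1.94, 2.19]])
rng = np.random.default_rng(0)

def sample_F(n):
    """Draw n realizations of F(w) in (7); each realization is a 4x3 matrix."""
    w1 = rng.multivariate_normal(np.zeros(3), Sigma, size=n)   # w1 ~ N(0, Sigma)
    w2 = rng.uniform(0.0, 1.0, size=(n, 3))                    # w2 uniform on [0,1]^3
    # rows of F(w): w1, w2, 2*w1 - w2, w1^2 (element-wise square)
    return np.stack([w1, w2, 2.0 * w1 - w2, w1 ** 2], axis=1)  # shape (n, 4, 3)

def viol(theta, n_mc=1000):
    """Monte Carlo estimate of Viol(theta) for the inequalities F(w) theta <= 1."""
    lhs = sample_F(n_mc) @ theta                               # shape (n_mc, 4)
    return np.mean(np.any(lhs > 1.0, axis=1))

# a grid point theta is kept if its estimated violation probability is below eps
eps = 0.15
grid = np.linspace(-1.0, 1.0, 11)
ccs_points = [(a, b, c) for a in grid for b in grid for c in grid
              if viol(np.array([a, b, c])) <= eps]
```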
### 2.1 Chance constrained optimization
Finding an optimal $\theta\in\mathbb{X}_{\varepsilon}$ for a given cost
function $J:~{}\mathbb{R}^{n_{\theta}}\rightarrow\mathbb{R}$ leads to the
chance constrained optimization (CCO) problem
$\min_{\theta\in\mathbb{X}_{\varepsilon}}J(\theta),$ (8)
where the cost function $J(\theta)$ is usually assumed to be convex, often
even quadratic or linear.
We remark that the solution of the CCO problem (8) is in general NP-hard, for
the same reasons reported before. We also note that several stochastic
optimization problems arising in different application contexts can be
formulated as a CCO. Typical examples include the reservoir system
design problem proposed in [20], where the problem is to minimize the total
building and penalty costs while satisfying demands for all sites and all
periods with a given probability, or the cash matching problem [21], where one
aims at maximizing the portfolio value at the end of the planning horizon
while covering all scheduled payments with a prescribed probability. CCO
problems also frequently arise in short-term planning problems in power
systems. These optimal power flow (OPF) problems are routinely solved as part
of the real-time operation of the power grid. The aim is to determine minimum-
cost production levels of controllable generators while reliably
delivering electricity to customers across a large geographical area, see e.g.
[8] and references therein.
In the next subsections, we report two control-related problems which served
as motivation of our study.
### 2.2 First motivating example: Stochastic MPC
To motivate the proposed approach, we consider the Stochastic MPC framework
proposed in [12, 11]. We are given a discrete-time system
$x_{k+1}=A(\sigma_{k})x_{k}+B(\sigma_{k})u_{k}+a_{\sigma}(\sigma_{k}),$ (9)
subject to generic uncertainty $\sigma_{k}\in\mathbb{R}^{n_{\sigma}}$, with
state $x_{k}\in\mathbb{R}^{n_{x}}$, control input
$u_{k}\in\mathbb{R}^{n_{u}}$, and the vector valued function
$a_{\sigma}(\sigma_{k})$ representing additive disturbance affecting the
system state. The system matrices $A(\sigma_{k})$ and $B(\sigma_{k})$, of
appropriate dimensions, are (possibly nonlinear) functions of the uncertainty
$\sigma_{k}$ at step $k$. For $k=1,2,\ldots$, the disturbances $\sigma_{k}$
are modeled as realizations of a stochastic process. In particular,
$\sigma_{k}$ are assumed to be independent and identically distributed (iid)
realizations of zero-mean random variables with support
$\mathcal{S}\subseteq\mathbb{R}^{n_{\sigma}}$. Note that the presence of both
additive and multiplicative uncertainty, combined with the nonlinear
dependence on the uncertainty, renders the problem particularly arduous.
Furthermore, we remark that the system representation in (9) is very general,
and encompasses, among others, those in [11, 12, 22].
Given the model (9) and a realization of the state $x_{k}$ at time $k$, state
predictions $t$ steps ahead are random variables as well and are denoted
$x_{t|k}$, to differentiate them from the realization $x_{t+k}$. Similarly,
$u_{t|k}$ denotes the predicted inputs, which are computed based on the realization
of the state $x_{k}$.
Contrary to [11, 12, 22], where the system dynamics were subject to individual
state and input chance constraints, here we take a more challenging route, and
we consider joint state and input chance constraints of the form (the case
where one wants to impose hard input constraints can also be formulated in
a similar framework, see e.g. [11]):
$\mathsf{Pr}_{\boldsymbol{\sigma}}\left\\{H_{x}x_{t|k}+H_{u}u_{t|k}\leq\mathbf{1}_{n_{t}}|x_{k}\right\\}\geq
1-\varepsilon,$ (10)
with $t\in\\{0,\ldots,T-1\\}$, $\varepsilon\in(0,1)$, and
$H_{x}\in\mathbb{R}^{n_{\ell}\times n_{x}}$,
$H_{u}\in\mathbb{R}^{n_{\ell}\times n_{u}}$.
The probability $\mathsf{Pr}_{\boldsymbol{\sigma}}$ is measured with respect
to the sequence ${\boldsymbol{\sigma}}=\\{\sigma_{t}\\}_{t>k}$. Hence,
equation (10) states that the probability of violating the linear constraint
$H_{x}x+H_{u}u\leq 1$ for any future realization of the disturbance should not
be larger than $\varepsilon$.
The objective is to derive an asymptotically stabilizing control law for the
system (9) such that, in closed loop, the constraint (10) is satisfied.
Following the approach in [12], a stochastic MPC algorithm is considered to
solve the constrained control problem. The approach is based on repeatedly
solving a stochastic optimal control problem over a finite, moving horizon,
but implementing only the first control action. The design parameter $\theta$
is then given by the control sequence
$\mathbf{u}_{k}=(u_{0|k},u_{1|k},...,u_{T-1|k})$ and the prototype optimal
control problem to be solved at each sampling time $k$ is defined by the cost
function
$\displaystyle J_{T}(x_{k},\mathbf{u}_{k})=$
$\displaystyle\mathbb{E}\left\\{\sum_{t=0}^{T-1}\left(x_{t|k}^{\top}Qx_{t|k}+u_{t|k}^{\top}Ru_{t|k}\right)+x_{T|k}^{\top}Px_{T|k}~{}|~{}x_{k}\right\\},$ (11)
with $Q\in\mathbb{R}^{n_{x}\times n_{x}}$, $Q\succeq 0$,
$R\in\mathbb{R}^{n_{u}\times n_{u}}$, $R\succ 0$, and appropriately chosen
$P\succ 0$, subject to the system dynamics (9) and constraints (10).
The online solution of the stochastic MPC problem remains a challenging task,
but several special cases that can be evaluated exactly, as well as methods
for approximating the general solution, have been proposed in the literature. The
approach followed in this work was first proposed in [11, 12], where an
offline sampling scheme was introduced. Therein, with a prestabilizing input
parameterization
$u_{t|k}=Kx_{t|k}+v_{t|k},$ (12)
with suitably chosen control gain $K\in\mathbb{R}^{n_{u}\times n_{x}}$ and new
design parameters $v_{t|k}\in\mathbb{R}^{n_{u}}$, equation (9) is solved
explicitly for the predicted states $x_{1|k},\ldots,x_{T|k}$ and predicted
inputs $u_{0|k},\ldots,u_{T-1|k}$. In this case, the expected value of the
finite-horizon cost (11) can be evaluated offline, leading to a quadratic
cost function of the form
$J_{T}(x_{k},\mathbf{v}_{k})=\begin{bmatrix}x_{k}^{\top}&\textbf{v}_{k}^{\top}&\textbf{1}_{n_{x}}^{\top}\end{bmatrix}\tilde{S}\begin{bmatrix}x_{k}\\\
\textbf{v}_{k}\\\ \textbf{1}_{n_{x}}\\\ \end{bmatrix}$ (13)
in the deterministic variables
$\mathbf{v}_{k}=(v_{0|k},v_{1|k},...,v_{T-1|k})$ and $x_{k}$.
Focusing now on the constraint definition, we notice that by introducing the
uncertainty sequence
$\boldsymbol{\sigma}_{k}=\\{\sigma_{t}\\}_{t=k,...,k+T-1}$, we can rewrite the
joint chance constraint defined by equation (10) as
$\displaystyle\mathbb{X}_{\varepsilon}^{\textsc{smpc}}=\left\\{\
\begin{bmatrix}x_{k}\\\
\mathbf{v}_{k}\end{bmatrix}\in\mathbb{R}^{n_{x}+n_{u}T}~{}:~{}\right.$
$\displaystyle\Bigl{.}\mathsf{Pr}_{\boldsymbol{\sigma}_{k}}\left\\{\begin{bmatrix}f_{\ell}^{x}(\boldsymbol{\sigma}_{k})\\\
f_{\ell}^{v}(\boldsymbol{\sigma}_{k})\end{bmatrix}^{\top}\begin{bmatrix}x_{k}\\\
\mathbf{v}_{k}\end{bmatrix}\leq 1,\ell\in[n_{\ell}]\right\\}\geq
1-\varepsilon\Bigr{\\}},$ (14)
with
$f_{\ell}^{x}:\mathbb{R}^{n_{\sigma}}\to\mathbb{R}^{n_{x}},f_{\ell}^{v}:\mathbb{R}^{n_{\sigma}}\to\mathbb{R}^{n_{u}T}$
being known functions of the sequence of random variables
$\boldsymbol{\sigma}_{k}$. We remark that, in the context of this paper,
neither the detailed derivation of the cost matrix $\tilde{S}$ in (13) nor
that of $f_{\ell}^{v},f_{\ell}^{x}$ are relevant for the reader, who can refer
to [12, Appendix A] for details. Note that, by defining
$\theta=[x_{k}^{\top},\mathbf{v}_{k}^{\top}]^{\top}$, (14) is given in the
form of (5).
As discussed in [11], obtaining a good and simple enough approximation of the
set $\mathbb{X}_{\varepsilon}^{\textsc{smpc}}$ is extremely important for
online implementation of SMPC schemes. In particular, if we are able to
replace the set $\mathbb{X}_{\varepsilon}^{\textsc{smpc}}$ by a suitable inner
approximation, we would be able to guarantee probabilistic constraint
satisfaction of the ensuing SMPC scheme. On the other hand, we would like this
inner approximation to be simple enough so as to render the online computations
sufficiently fast.
### 2.3 Second motivating example: probabilistic set membership estimation
Suppose that there exists $\bar{\theta}\in\Theta$ such that
$|y-\bar{\theta}^{T}\varphi(x)|\leq\rho,\;\forall(x,y)\in\mathbb{W}\subseteq\mathbb{R}^{n_{x}}\times\mathbb{R},$
where $\varphi:\mathbb{R}^{n_{x}}\to\mathbb{R}^{n_{\theta}}$ is a (possibly
non-linear) regressor function, and $\rho>0$ accounts for modelling errors.
The (deterministic) set membership estimation problem, see [23], [24],
consists of computing the set of parameters $\theta$ that satisfy the
constraint
$|y-\theta^{T}\varphi(x)|\leq\rho$
for all possible values of $(x,y)\in\mathbb{W}$. In the literature, this set
is usually referred to as the feasible parameter set, that is
${\mathsf{FPS}}\doteq\\{\;\theta\in\Theta\;:\;|y-\theta^{T}\varphi(x)|\leq\rho,\;\forall(x,y)\in\mathbb{W}\;\\}.$
(15)
If, for given $w=(x,y)$, we define the set
$\mathbb{X}(w)=\\{\;\theta\in\Theta\;:\;|y-\theta^{T}\varphi(x)|\leq\rho\;\\},$
then the feasible parameter set ${\mathsf{FPS}}$ can be rewritten as
${\mathsf{FPS}}=\\{\;\theta\in\Theta\;:\;\theta\in\mathbb{X}(w),\;\forall
w\in\mathbb{W}\;\\}.$
The deterministic set membership problem suffers from the following
limitations in real applications: i) due to the possible non-linearity of
$\varphi(\cdot)$, checking if a given $\theta\in\Theta$ satisfies the
constraint $\theta\in\mathbb{X}(w)$, for every $w\in\mathbb{W}$, is often a
difficult problem; ii) in many situations, only samples of $\mathbb{W}$ are
available: thus, the robust constraint cannot be checked and only outer bounds
of ${\mathsf{FPS}}$ can be computed; and iii) because of outliers and the possibly
non-finite support of $\mathbb{W}$, the set ${\mathsf{FPS}}$ is often empty
(especially for small values of $\rho$).
If a probability distribution is defined on $\mathbb{W}$, the probabilistic
set membership estimation problem is that of characterizing the set of
parameters $\theta$ that satisfy
$\mathsf{Pr}_{\mathbb{W}}\\{|y-\theta^{T}\varphi(x)|\leq\rho\\}\geq
1-\epsilon,$
for a given probability parameter $\epsilon\in(0,1)$. Hence, we can define
${\mathsf{FPS}}_{\epsilon}$ the set of parameters that satisfy the previous
probabilistic constraint, that is,
${\mathsf{FPS}}_{\epsilon}=\\{\;\theta\in\Theta\;:\;\mathsf{Pr}_{\mathbb{W}}\\{\theta\in\mathbb{X}(w)\\}\geq
1-\epsilon\;\\}.$
It is immediate to notice that this problem fits into the formulation proposed
in this section: it suffices to define
$F(w)=\left[\begin{array}[]{c}\varphi^{T}(x)\\\
-\varphi^{T}(x)\end{array}\right],\;g(w)=\left[\begin{array}[]{c}\rho+y\\\
\rho-y\end{array}\right].$
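As a small illustration, the mapping from one data pair $w=(x,y)$ to the pair $(F(w),g(w))$ above can be coded directly; the sketch below assumes a user-supplied regressor `phi` and error bound `rho` (names ours).

```python
import numpy as np

def membership_rows(x, y, phi, rho):
    """Return (F(w), g(w)) such that F(w) theta <= g(w)  iff  |y - theta^T phi(x)| <= rho."""
    p = np.asarray(phi(x))
    F = np.vstack([p, -p])              # [ phi(x)^T ; -phi(x)^T ]
    g = np.array([rho + y, rho - y])
    return F, g
```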
### 2.4 Chance constrained approximations
Motivated by the discussion above, we are ready to formulate the main problem
studied in this paper.
###### Problem 1 ($\varepsilon$-CCS approximation).
Given the set of linear inequalities (2), and a violation parameter
$\varepsilon$, find an inner approximation of the set
$\mathbb{X}_{\varepsilon}$. The approximation should be: i) simple enough, ii)
easily computable.
A solution to this problem is provided in this paper. In particular, regarding
i), we present a solution in which the approximating set is represented by a few
linear inequalities. Regarding ii), we propose a computationally efficient
procedure for its construction (see Algorithm 1).
Before presenting our approach, in the next section we provide a brief
overview of different methods proposed in the literature to
construct approximations of the $\varepsilon$-CCS.
## 3 Overview on different approaches to $\varepsilon$-CCS approximations
The construction of computationally efficient approximations of the
$\varepsilon$-CCS is a long-standing problem. In particular, the reader is
referred to the recent work [10], which provides a rather complete discussion
on the topic, and covers the most recent results. The authors distinguish
three different approaches, which we very briefly revisit here.
### 3.1 Exact techniques
In some very special cases, the $\varepsilon$-CCS is convex and hence the CCO
problem admits a unique solution. This is the case, for instance, of
individual chance constraints with $w$ being Gaussian [25]. Other important
examples of convexity of the set $\mathbb{X}_{\varepsilon}$ involve log-
concave distributions [1, 26]. General sufficient conditions on the convexity
of chance constraints may be found in [27, 28, 29, 19]. However, all these
cases are very specific and hardly extend to the joint chance constraints
considered in this work.
### 3.2 Robust techniques
A second class of approaches consists in finding deterministic conditions that
allow one to construct a set $\underline{\mathbb{X}}$, which is a guaranteed inner
convex approximation of the probabilistic set $\mathbb{X}_{\varepsilon}$. The
classical solution consists in the application of Chebyshev-like
inequalities, see e.g. [30, 31]. More recent techniques, which have proved
particularly promising, involve robust optimization [3], such as the convex
approximations introduced in [32]. A particularly interesting convex relaxation
involves the so-called Conditional Value at Risk (CVaR), see [33] and
references therein. Finally, we point out some recent techniques based on
polynomial moments relaxations [34, 35]. Nonetheless, it should be remarked
that these techniques usually suffer from conservatism and computational
complexity issues, especially in the case of joint chance constraints.
### 3.3 Sample-based techniques
In recent years, a novel approach to approximate chance constraints, based on
random sampling of the uncertain parameters, has gained popularity, see e.g.
[4, 5] and references therein. Sampling-based techniques are characterized by
the use of a finite number $N$ of iid samples of the uncertainty
$\left\\{w^{(1)},w^{(2)},\ldots,w^{(N)}\right\\}$ drawn according to a
probability distribution $\mathsf{Pr}_{\mathbb{W}}$. To each sample
$w^{(i)},i\in[N]$, we can associate the following sampled set
$\mathbb{X}(w^{(i)})=\\{\;\theta\in\Theta\;:\;F(w^{(i)})\theta\leq
g(w^{(i)})\;\\},$ (16)
sometimes referred to as a scenario, since it represents an observed instance of
our probabilistic constraint.
Then, the scenario approach considers the CCO problem (8) and approximates its
solution through the following scenario problem
$\displaystyle\theta^{*}_{sc}=\arg\min J(\theta)$ (17)
$\displaystyle\text{subject to }\theta\in\mathbb{X}(w^{(i)}),i\in[N].$
We note that, if the function $J(\theta)$ is convex, problem (17) becomes a
linearly constrained convex program, for which very efficient solution
approaches exist. A fundamental result [36, 37, 38, 39] provides a
probabilistic certification of the constraint satisfaction for the solution to
the scenario problem. In particular, it is shown that, under some mild
assumptions (non-degenerate problem), we have
$\mathsf{Pr}_{\mathbb{W}^{N}}\left\\{\mathsf{Viol}(\theta^{*}_{sc})>\varepsilon\right\\}\leq\mathbf{B}(n_{\theta}-1;N,\varepsilon),$
(18)
where the probability in (18) is measured with respect to the samples
$\\{w^{(1)},w^{(2)},\ldots,w^{(N)}\\}$. Moreover, the bound in (18) is shown to
be tight. Indeed, for the class of so-called fully-supported problems, the
bound holds with equality, i.e. the Binomial distribution
$\mathbf{B}(n_{\theta}-1;N,\varepsilon)$ represents the exact probability
distribution of the violation probability [37].
A few observations are in order regarding the scenario approach and its
relationship with Problem 1. First, if we define the sampled constraints set
as
$\mathbb{X}_{N}\doteq\bigcap_{i=1}^{N}\mathbb{X}(w^{(i)}),$ (19)
we see that the scenario approach consists in approximating the constraint
$\theta\in\mathbb{X}_{\varepsilon}$ in (8) with its sampled version
$\theta\in\mathbb{X}_{N}$. On the other hand, it should be remarked that the
scenario approach cannot be used to derive any guarantee on the relationship
existing between $\mathbb{X}_{N}$ and $\mathbb{X}_{\varepsilon}$. Indeed, the
nice probabilistic property in (18) holds only for the optimum of the scenario
program $\theta^{*}_{sc}$. This is a fundamental point, since the scenario
results build on the so-called support constraints, which are defined for the
optimum point $\theta^{*}_{sc}$ only.
On the contrary, in our case we are interested in establishing a direct
relation (in probabilistic terms) between the set $\mathbb{X}_{N}$ and the
$\varepsilon$-CCS $\mathbb{X}_{\varepsilon}$. This is indeed possible, but it
requires resorting to results based on Statistical Learning Theory [40],
summarized in the following lemma.
###### Lemma 1 (Learning Theory bound).
Given probabilistic levels $\delta\in(0,1)$ and $\varepsilon\in(0,0.14)$, if
the number of samples $N$ is chosen so that $N\geq N_{LT}$, with
$N_{LT}\doteq\frac{4.1}{\varepsilon}\Big{(}\ln\frac{21.64}{\delta}+4.39n_{\theta}\,\log_{2}\Big{(}\frac{8en_{\ell}}{\varepsilon}\Big{)}\Big{)},$
(20)
then
$\mathsf{Pr}_{\mathbb{W}^{N}}\left\\{\mathbb{X}_{N}\subseteq\mathbb{X}_{\varepsilon}\right\\}\geq
1-\delta$.
The lemma, whose proof is reported in Appendix A.1, is a direct consequence of
the results on VC-dimension of the so-called $(\alpha,k)$-Boolean Function,
given in [41].
###### Remark 2 (Sample-based SMPC).
The learning theory-based approach discussed in this section has been applied
in [11] to derive an _offline_ probabilistic inner approximation of the chance
constrained set $\mathbb{X}_{\varepsilon}^{\textsc{smpc}}$ defined in (14),
considering individual chance constraints. In particular, the bound (2) is a
direct extension to the case of joint chance constraints of the result proved
in [11]. Note that since we are considering multiple constraints at the same
time (like in (2)), the number of constraints $n_{\ell}$ enters into the
sample size bound. To explain how the SMPC design in [11] extends to the joint
chance constraints framework, we briefly recall it.
First, we extract offline (i.e. when designing the SMPC control) $N$ iid
samples of the uncertainty, $\boldsymbol{\sigma}_{k}^{(i)}$ of
$\boldsymbol{\sigma}_{k}$, and we consider the sampled set
$\displaystyle\mathbb{X}^{\textsc{smpc}}(\boldsymbol{\sigma}_{k}^{(i)})=\Biggl{\\{}\
\begin{bmatrix}x_{k}\\\
\mathbf{v}_{k}\end{bmatrix}:\begin{bmatrix}f_{\ell}^{x}(\boldsymbol{\sigma}_{k}^{(i)})\\\
f_{\ell}^{v}(\boldsymbol{\sigma}_{k}^{(i)})\end{bmatrix}^{\top}\begin{bmatrix}x_{k}\\\
\mathbf{v}_{k}\end{bmatrix}\leq 1,\Biggl{.}\ell\in[n_{\ell}]\Biggr{\\}},$
and
$\mathbb{X}_{N}^{\textsc{smpc}}\doteq\bigcap_{i=1}^{N}\mathbb{X}^{\textsc{smpc}}(\boldsymbol{\sigma}_{k}^{(i)})$.
Then, applying Lemma 1 with $n_{\theta}=n_{x}+n_{u}T$, we conclude that if we
extract $N\geq N_{LT}^{\textsc{smpc}}$ samples, it is guaranteed that, with
probability at least $1-\delta$, the sample approximation
$\mathbb{X}_{N}^{\textsc{smpc}}$ is a subset of the original chance constraint
$\mathbb{X}_{\varepsilon}^{\textsc{smpc}}$. Exploiting these results, the SMPC
problem can be approximated conservatively by the linearly constrained
quadratic program
$\displaystyle\min_{\mathbf{v}_{k}}~{}J_{T}(x_{k},\mathbf{v}_{k})\textrm{
subject to }(x_{k},\mathbf{v}_{k})\in\mathbb{X}_{N}^{\textsc{smpc}}.$ (21)
Hence the result reduces the original stochastic optimization program to an
efficiently solvable quadratic program. This represents an undisputed
advantage, which has been demonstrated for instance in [12]. On the other
hand, it turns out that the ensuing number of linear constraints, equal to
$n_{\ell}\cdot N_{LT}^{\textsc{smpc}}$, may still be too large. For instance,
even for a moderately sized MPC problem with $n_{x}=5$ states, $n_{u}=2$
inputs, prediction horizon of $T=10$, simple interval constraints on states
and inputs (i.e. $n_{\ell}=2n_{x}+2n_{u}=14$), and for a reasonable choice of
probabilistic parameters, i.e. $\varepsilon=0.05$ and $\delta=10^{-6}$, we get
$N_{LT}^{\textsc{smpc}}=114,530$, which in turn corresponds to more than $1.6$
million linear inequalities. For this reason, in [11] a post-processing step
was proposed to remove redundant constraints. While it is indeed true that all
the cumbersome computations may be performed offline, it is still the case
that, in applications with stringent requirements on the solution time, the
final number of inequalities may easily become prohibitive.
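As a quick sanity check of the orders of magnitude quoted above, the bound (20) of Lemma 1 can be evaluated directly. The short Python sketch below (function name ours) reproduces the size of $N_{LT}^{\textsc{smpc}}$ for the dimensions used in this remark.

```python
import math

def n_lt(eps, delta, n_theta, n_ell):
    """Sample size N_LT of eq. (20) in Lemma 1, rounded up to an integer."""
    return math.ceil(4.1 / eps * (math.log(21.64 / delta)
                                  + 4.39 * n_theta * math.log2(8 * math.e * n_ell / eps)))

# Remark 2 sizes: n_x = 5, n_u = 2, T = 10  ->  n_theta = n_x + n_u*T = 25, n_ell = 14
print(n_lt(eps=0.05, delta=1e-6, n_theta=25, n_ell=14))   # ~1.15e5, in line with the value quoted above
```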
Remark 2 motivates the approach presented in the next section, which builds
upon the results presented in [13]. We show how the probabilistic scaling
approach directly leads to approximations of user-chosen complexity, which can
be directly used in applications, without the need for a post-
processing step to reduce the complexity of the sampled set.
## 4 The Probabilistic Scaling Approach
We propose a novel sample-based approach, alternative to the randomized
procedures proposed so far, which allows one to maintain the nice probabilistic
features of these techniques, while at the same time providing the designer
with a way of tuning the complexity of the approximation.
The main idea behind this approach consists of first obtaining a simple
initial approximation of the shape of the probabilistic set
$\mathbb{X}_{\varepsilon}$ by exploiting scalable simple approximating sets
(Scalable SAS) of the form
${\mathbb{S}}(\gamma)=\theta_{c}\oplus\gamma{\mathbb{S}}.$ (22)
These sets are described by a center point $\theta_{c}$ and a low-complexity
shape set ${\mathbb{S}}$. The center $\theta_{c}$ and the shape ${\mathbb{S}}$
constitute the design parameters of the proposed approach. By appropriately
selecting the shape ${\mathbb{S}}$, the designer can control the complexity of
the approximating set.
Note that we do not ask this initial set to have any guarantee of a
probabilistic nature. What we ask is that this set is able to “capture”
somehow the shape of the set $\mathbb{X}_{\varepsilon}$. Recipes for a possible
procedure for constructing this initial set are provided in Section 5. The set
${\mathbb{S}}$ constitutes the starting point of a scaling procedure, which
allows one to derive a probabilistically guaranteed approximation of the
$\varepsilon$-CCS, as detailed in the next subsection. In particular, we show how
an optimal scaling factor $\gamma$ can be derived so that the set (22) is
guaranteed to be an inner approximation of $\mathbb{X}_{\varepsilon}$ with the
desired confidence level $\delta$. We refer to the set ${\mathbb{S}}(\gamma)$
as a Scalable SAS.
### 4.1 Probabilistic Scaling
In this section, we address the problem of how to scale the set
${\mathbb{S}}(\gamma)$ around its center $\theta_{c}$ to guarantee, with
confidence level $\delta\in(0,1)$, the inclusion of the scaled set into
$\mathbb{X}_{\varepsilon}$. Within this sample-based procedure we assume that
$N_{\gamma}$ iid samples $\\{w^{(1)},\ldots,w^{(N_{\gamma})}\\}$ are obtained
from $\mathsf{Pr}_{\mathbb{W}}$ and based on these, we show how to obtain a
scalar $\bar{\gamma}>0$ such that
$\mathsf{Pr}_{\mathbb{W}^{N_{\gamma}}}\\{{\mathbb{S}}(\bar{\gamma})\subseteq\mathbb{X}_{\varepsilon}\\}\geq
1-\delta.$
To this end, we first define the scaling factor associated with a given
realization of the uncertainty.
###### Definition 3 (Scaling factor).
Given a Scalable SAS ${\mathbb{S}}(\gamma)$, with given center $\theta_{c}$
and shape ${\mathbb{S}}\subset\Theta$, and a realization $w\in\mathbb{W}$, we
define the scaling factor of ${\mathbb{S}}(\gamma)$ relative to $w$ as
$\gamma(w)\doteq\left\\{\begin{array}[]{cc}0&\,\,\,\mbox{if}\;\theta_{c}\not\in\mathbb{X}(w)\\\
\max\limits_{{\mathbb{S}}(\gamma)\subseteq\mathbb{X}(w)}\gamma&\,\,\,\mbox{otherwise}.\end{array}\right.$
with $\mathbb{X}(w)$ defined as in (16).
That is, $\gamma(w)$ represents the maximal scaling that can be applied to
${\mathbb{S}}(\gamma)=\theta_{c}\oplus\gamma{\mathbb{S}}$ around the center
$\theta_{c}$ so that ${\mathbb{S}}(\gamma)\subseteq\mathbb{X}(w)$. The
following theorem states how to obtain, by means of sampling, a scaling factor
$\bar{\gamma}$ that guarantees, with high probability, that
${\mathbb{S}}(\bar{\gamma})\subseteq\mathbb{X}_{\varepsilon}$.
###### Theorem 1 (Probabilistic scaling).
Given a candidate Scalable SAS ${\mathbb{S}}(\gamma)$, with
$\theta_{c}\in\mathbb{X}_{\varepsilon}$, accuracy parameter
$\varepsilon\in(0,1)$, confidence level $\delta\in(0,1)$, and a discarding
integer parameter $r\geq 0$, let $N_{\gamma}$ be chosen such that
$\mathbf{B}(r;N_{\gamma},\varepsilon)\leq\delta.$ (23)
Draw $N_{\gamma}$ iid samples $\\{w^{(1)},w^{(2)},\ldots,w^{(N_{\gamma})}\\}$
from distribution $\mathsf{Pr}_{\mathbb{W}}$, compute the corresponding
scaling factor
$\gamma_{i}\doteq\gamma(w^{(i)}),$ (24)
for $i\in[N_{\gamma}]$ according to Definition 3, and let
$\bar{\gamma}=\gamma_{1+r:N_{\gamma}}$. Then, with probability no smaller than
$1-\delta$,
${\mathbb{S}}(\bar{\gamma})=\theta_{c}\oplus\bar{\gamma}{\mathbb{S}}\subseteq\mathbb{X}_{\varepsilon}.$
Proof: If $\bar{\gamma}=0$, then we have
${\mathbb{S}}(\bar{\gamma})\equiv\theta_{c}\in\mathbb{X}_{\varepsilon}$.
Hence, consider $\bar{\gamma}>0$. From Property 1 in Appendix A.2, we have
that $\bar{\gamma}$ satisfies, with probability no smaller than $1-\delta$,
that
$\mathsf{Pr}_{\mathbb{W}}\\{{\mathbb{S}}(\bar{\gamma})\not\subseteq\mathbb{X}(w)\\}\leq\varepsilon$.
Equivalently,
$\mathsf{Pr}_{\mathbb{W}}\\{{\mathbb{S}}(\bar{\gamma})\subseteq\mathbb{X}(w)\\}\geq 1-\varepsilon.$
This can be rewritten as $\mathsf{Pr}_{\mathbb{W}}\\{F(w)\theta\leq
g(w),\;\;\forall\theta\in{\mathbb{S}}(\bar{\gamma})\\}\geq 1-\varepsilon,$ and it
implies that the probability of violation in
$\theta_{c}\oplus\bar{\gamma}{\mathbb{S}}$ is no larger than $\varepsilon$,
with probability no smaller than $1-\delta$. ∎
In the light of the theorem above, from now on we will assume that the
Scalable SAS is such that $\theta_{c}\in\mathbb{X}_{\varepsilon}$. The above
result leads to the following simple algorithm, in which we summarise the main
steps for constructing the scaled set, and we provide an explicit way of
determining the discarding parameter $r$.
Algorithm 1 Probabilistic SAS Scaling
1:Given a candidate Scalable SAS ${\mathbb{S}}(\gamma)$, and probability
levels $\varepsilon$ and $\delta$, choose
$N_{\gamma}\geq\frac{7.47}{\varepsilon}\ln\frac{1}{\delta}\quad\text{ and
}\quad r=\left\lfloor\frac{\varepsilon N_{\gamma}}{2}\right\rfloor.$ (25)
2:Draw $N_{\gamma}$ samples of the uncertainty
$w^{(1)},\ldots,w^{(N_{\gamma})}$
3:for $i=1$ to $N_{\gamma}$ do
4: Solve the optimization problem $\displaystyle\gamma_{i}\doteq$
$\displaystyle\max_{{\mathbb{S}}(\gamma)\subseteq\mathbb{X}(w^{(i)})}\gamma$
(26)
5:end for
6:Return $\bar{\gamma}=\gamma_{1+r:N_{\gamma}}$, the $(1+r)$-th smallest value
of $\gamma_{i}$.
A few comments are in order regarding the algorithm above. In step 4, for each
uncertainty sample $w^{(i)}$ one has to solve an optimization problem, which
amounts to finding the largest value of $\gamma$ such that
${\mathbb{S}}(\gamma)$ is contained in the set $\mathbb{X}(w^{(i)})$ defined
in (16). If the SAS is chosen accurately, we can show that this problem is
convex and computationally very efficient: this is discussed in Section 5.
Then, in step 6, one has to re-order the set
$\\{\gamma_{1},\gamma_{2},\ldots,\gamma_{N_{\gamma}}\\}$ so that the first
element is the smallest one, the second element is the second smallest one,
and so on, and then return the $(r+1)$-th element of the reordered
sequence. The following corollary applies to Algorithm 1.
###### Corollary 1.
Given a candidate SAS set in the form
${\mathbb{S}}(\gamma)=\theta_{c}\oplus\gamma{\mathbb{S}}$, assume that
$\theta_{c}\in\mathbb{X}_{\varepsilon}$. Then, Algorithm 1 guarantees that
${\mathbb{S}}(\bar{\gamma})\subseteq\mathbb{X}_{\varepsilon}$ with probability
at least $1-\delta$.
Proof: The result is a direct consequence of Theorem 1, which guarantees that,
for given $r\geq 0$,
${\mathbb{S}}(\bar{\gamma})\subseteq\mathbb{X}_{\varepsilon}$ holds with probability
at least $1-\delta$ if the scaling is performed on a number of samples satisfying (23).
From [42, Corollary 1] it follows that, in order to satisfy (23), it suffices
to take $N_{\gamma}$ such that
$N_{\gamma}\geq\frac{1}{\varepsilon}\left(r+\ln\frac{1}{\delta}+\sqrt{2r\ln\frac{1}{\delta}}\right).$
(27)
Since $r=\lfloor\frac{\varepsilon N_{\gamma}}{2}\rfloor$, we have that
$r\leq\frac{\varepsilon N_{\gamma}}{2}$. Thus, inequality (27) is satisfied if
$\displaystyle N_{\gamma}$ $\displaystyle\geq$
$\displaystyle\frac{1}{\varepsilon}\left(\frac{\varepsilon
N_{\gamma}}{2}+\ln\frac{1}{\delta}+\sqrt{\varepsilon
N_{\gamma}\ln\frac{1}{\delta}}\right)$ $\displaystyle=$
$\displaystyle\frac{N_{\gamma}}{2}+\frac{1}{\varepsilon}\ln\frac{1}{\delta}+\sqrt{N_{\gamma}\frac{1}{\varepsilon}\ln\frac{1}{\delta}}.$
Letting $\nabla\doteq\sqrt{N_{\gamma}}$ and
$\alpha\doteq\sqrt{\frac{1}{\varepsilon}\ln\frac{1}{\delta}}$ (note that both
quantities under the square roots are positive), the above inequality rewrites as
$\nabla^{2}-2\alpha\nabla-2\alpha^{2}\geq 0,$ which, for $\nabla>0$, holds if and only if
$\nabla\geq(1+\sqrt{3})\alpha$. In turn, this rewrites as
$N_{\gamma}\geq\frac{(1+\sqrt{3})^{2}}{\varepsilon}\ln\frac{1}{\delta}.$
The formula (25) follows by observing that $(1+\sqrt{3})^{2}<~{}7.47$. ∎
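For concreteness, a minimal Python sketch of Algorithm 1 is reported below. The per-sample scaling factor $\gamma(w)$ of Definition 3 is problem dependent, so it is passed in as a user-supplied callable; all names are ours and purely illustrative.

```python
import math
import numpy as np

def probabilistic_scaling(eps, delta, sample_w, scaling_factor, seed=0):
    """Sketch of Algorithm 1: returns gamma_bar = gamma_{1+r : N_gamma}."""
    rng = np.random.default_rng(seed)
    # step 1: sample size and discarding parameter, eq. (25)
    n_gamma = math.ceil(7.47 / eps * math.log(1.0 / delta))
    r = math.floor(eps * n_gamma / 2.0)
    # steps 2-5: draw the samples and compute the scaling factor of each one
    gammas = np.array([scaling_factor(sample_w(rng)) for _ in range(n_gamma)])
    # step 6: the (1+r)-th smallest value
    return np.sort(gammas)[r]

# e.g. eps = 0.05, delta = 1e-6 gives n_gamma = 2065 and r = 51 (cf. Section 7)
```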
In the next sections, we provide a “library” of possible candidate SAS
shapes. We recall that these sets need to comply with two main requirements: i)
having a simple and low-complexity representation; and ii) being able to
capture the original shape of the $\varepsilon$-CCS. Moreover, in the light of
the discussion after Algorithm 1, we also ask these sets to be convex.
## 5 Candidate SAS: Sampled-polytope
First, we note that the most straightforward way to design a candidate SAS is
again to resort to a sample-based procedure: we draw a fixed number $N_{S}$ of
“design” uncertainty samples
$\\{\tilde{w}^{(1)},\ldots,\tilde{w}^{(N_{S})}\\}$ (these samples are denoted with a tilde to
distinguish them from the samples used in the probabilistic scaling procedure), and construct an initial
sampled approximation by introducing the following sampled-polytope SAS
${\mathbb{S}}_{N_{S}}=\bigcap_{j=1}^{N_{S}}\mathbb{X}(\tilde{w}^{(j)}).$ (28)
Note that the sampled polytope ${\mathbb{S}}_{N_{S}}$, by construction, is
given by the intersection of $n_{\ell}N_{S}$ half-spaces. Hence, we observe
that this approach provides very precise control on the final complexity of
the approximation, through the choice of the number of samples $N_{S}$.
However, it is also clear that a choice for which $N_{S}\ll N_{LT}$ implies that
the probabilistic properties of ${\mathbb{S}}_{N_{S}}$ before scaling will be
rather poor. Nevertheless, we emphasize again that this initial geometry neither has
nor requires any probabilistic guarantees, which are instead provided by the
probabilistic scaling discussed in Section 4.1. It should also be remarked
that this is only one possible heuristic. For instance, along this line one
could as well draw many samples and then apply a clustering algorithm to boil
them down to a desired number of samples.
We remark that, in order to apply the scaling procedure, we need to define a
center around which to perform the scaling. To this end, we could
compute the so-called Chebyshev center, defined as the center of the largest ball
inscribed in ${\mathbb{S}}_{N_{S}}$, i.e.
$\theta_{c}=\mathsf{Cheb}({\mathbb{S}}_{N_{S}})$. We note that computing the
Chebyshev center of a given polytope is an easy convex optimization problem,
for which efficient algorithms exist, see e.g. [43]. A possible alternative
would be the analytic center of ${\mathbb{S}}_{N_{S}}$, whose computation is
even easier (see [43] for further details). Once the center $\theta_{c}$ has
been determined, the scaling procedure can be applied to the set
${\mathbb{S}}_{N_{S}}(\gamma)\doteq\theta_{c}\oplus\gamma\\{{\mathbb{S}}_{N_{S}}\ominus\theta_{c}\\}$.
Note that the center needs to be inside $\mathbb{X}_{\varepsilon}$. Aside from
that, the choice of $\theta_{c}$ only affects the goodness of the shape, but
we can never know a priori whether the analytic center is a better choice than any
random center in $\mathbb{X}_{\varepsilon}$.
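The Chebyshev center mentioned above can be computed with a single linear program. The sketch below (assuming SciPy's `linprog`) uses the largest-inscribed-ball formulation for a polytope described by $A\theta\leq b$.

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Center and radius of the largest Euclidean ball inscribed in {theta : A theta <= b}."""
    n = A.shape[1]
    row_norms = np.linalg.norm(A, axis=1)
    # decision variables (theta_c, radius); maximize the radius
    c = np.zeros(n + 1)
    c[-1] = -1.0
    A_ub = np.hstack([A, row_norms[:, None]])
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[n]
```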
(a) ${\mathbb{S}}_{N_{S}}$ with $N_{S}=100$. $\rightarrow$ $\gamma=0.8954$
(b) ${\mathbb{S}}_{N_{S}}$ with $N_{S}=1,000$. $\rightarrow$ $\gamma=1.2389$
(c) LT-based (Lemma 1). $N_{LT}=52,044$
Figure 3: (a-b) Probabilistic scaling approximations of the $\varepsilon$-CCS.
Scaling procedure applied to a sampled-polytope with $N_{S}=100$ (a) and
$N_{S}=1,000$ (b). The initial sets are depicted in red, the scaled ones in
green. (c) Approximation obtained by direct application of Lemma 1. Note that,
in this latter case, to plot the set without out-of-memory errors a pruning
procedure [44] of the $52,044$ linear inequalities was necessary.
###### Example 3 (Sample-based approximations).
To illustrate how the proposed scaling procedure works in practice in the case
of sampled-polytope SAS, we revisit Example 2. To this end, a pre-fixed number
$N_{S}$ of uncertainty samples was drawn, and the set of inequalities
$F(\tilde{w}^{(j)})\theta\leq g(\tilde{w}^{(j)}),\quad j\in[N_{S}],$
with $F(w),g(w)$ defined in (7), were constructed, leading to the candidate
set ${\mathbb{S}}_{N_{S}}$. Then, the corresponding Chebyshev center was
computed, and Algorithm 1 was applied with $\varepsilon=0.05$,
$\delta=10^{-6}$, leading to $N_{\gamma}=2,120$.
We note that, in this case, the solution of the optimization problem in (26)
may be obtained by bisection on $\gamma$. Indeed, for given $\gamma$, checking
if ${\mathbb{S}}_{N_{S}}(\gamma)\subseteq\mathbb{X}(w^{(i)})$ amounts to
solving some simple linear programs.
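The feasibility check mentioned here can be sketched as follows: for a fixed $\gamma$, verifying ${\mathbb{S}}_{N_{S}}(\gamma)\subseteq\mathbb{X}(w^{(i)})$ amounts to one support-function LP per inequality, with ${\mathbb{S}}_{N_{S}}=\\{\theta:A\theta\leq b\\}$; a bisection on $\gamma$ can then be wrapped around this test. This is only an illustrative sketch using SciPy, not code from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def is_contained(gamma, A, b, theta_c, F, g):
    """True if theta_c + gamma*(S_Ns - theta_c) is inside {theta : F theta <= g}."""
    n = A.shape[1]
    for f_l, g_l in zip(F, g):
        # support of S_Ns = {z : A z <= b} in direction f_l:  max f_l^T z
        res = linprog(-f_l, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
        support = -res.fun
        # maximum of f_l^T theta over the scaled set
        if f_l @ theta_c + gamma * (support - f_l @ theta_c) > g_l:
            return False
    return True
```

Note that, since the support values do not depend on $\gamma$, the same LPs can also be used to compute the largest admissible $\gamma$ directly.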
Two different situations were considered: a case where the number of
inequalities is rather small $N_{S}=100$, and a case where the complexity of
the SAS is higher, i.e. $N_{S}=1,000$. The outcome of the procedure is illustrated in
Figure 3. We can observe that, for a small $N_{S}$ – Fig. 3(a) – the initial
approximation is rather large (although it is contained in
$\mathbb{X}_{\varepsilon}$, we remark that we do not have any guarantee that
this will happen). In this case, the probabilistic scaling returns
$\gamma=0.8954$ which is less than one. This means that, in order to obtain a
set fulfilling the desired probabilistic guarantees, we need to shrink it
around its center. In the second case, for a larger number of sampled
inequalities – Fig. 3(b) – the initial set (the red one) is much smaller, and
the scaling procedure inflates the set by returning a value of $\gamma$
greater than one, i.e. $\gamma=1.2389$. Note that choosing a larger number of
samples for the computation of the initial set does not imply that the final
set will be a better approximation of the $\varepsilon$-CCS.
Finally, we compare this approach to the scenario-like ones discussed in
Subsection 3.3. To this end, we also draw the approximation obtained by
directly applying the Learning Theory bound (20). Note that in this case,
since $n_{\theta}=3$ and $n_{\ell}=4$, we need to take $N_{LT}=13,011$
samples, corresponding to $52,044$ linear inequalities. The resulting set is
represented in Fig. 3(c). We point out that using this approximation i) the
set is much more complex, since the number of involved inequalities is much
larger, ii) the set is much smaller, hence providing a much more conservative
approximation of the $\varepsilon$-CCS. Hence, the ensuing chance-constrained
optimization problem will be computationally harder, and lead to a solution
with a larger cost or even to an infeasible problem, in cases where the
approximating set is too small.
## 6 Candidate SAS: Norm-based SAS
In this section, we propose a procedure in which the shape of the scalable SAS
may be selected a-priori. This corresponds to situations where the designer
wants to have full control in the final shape in terms of structure and
complexity. The main idea is to define so-called norm-based SAS of the form
${\mathbb{S}_{p}}(\gamma)\doteq\theta_{c}\oplus\gamma H\mathbb{B}_{p}^{s}$
(29)
where $\mathbb{B}_{p}^{s}$ is a $\ell_{p}$-ball in $\mathbb{R}^{s}$,
$H\in\mathbb{R}^{n_{\theta}\times s}$, with $s\geq n_{\theta}$, is a design matrix
(not necessarily square), and $\gamma$ is the scaling parameter. Note that
when the matrix $H$ is square (i.e. $s=n_{\theta}$) and positive definite
these sets belong to the class of $\ell_{p}$-norm based sets originally
introduced in [45]. In particular, in case of $\ell_{2}$ norm, the sets are
ellipsoids. This particular choice is the one studied in [14]. Here, we extend
this approach to a much more general family of sets, which encompasses for
instance zonotopes, obtained by letting $p=\infty$ and $s\geq n_{\theta}$.
Zonotopes have been widely studied in geometry, and have found several
applications in systems and control, in particular for problems of state
estimation and robust Model Predictive Control, see e.g. [46].
### 6.1 Scaling factor computation for norm-based SAS
We recall that the scaling factor $\gamma(w)$ is defined as $0$ if
$\theta_{c}\not\in\mathbb{X}(w)$ and as the largest value $\gamma$ for which
${\mathbb{S}_{p}}(\gamma)\subseteq\mathbb{X}(w)$ otherwise. The following
theorem, whose proof is reported in Appendix A.3, provides a direct and simple
way to compute in closed form the scaling factor for a given candidate norm-
based SAS.
###### Theorem 2 (Scaling factor for norm-based SAS).
Given a norm-based SAS ${\mathbb{S}}(\gamma)$ as in (29), and a realization
$w\in\mathbb{W}$, the scaling factor $\gamma(w)$ can be computed as
$\gamma(w)=\min_{\ell\in[n_{\ell}]}\;\gamma_{\ell}(w),$
with $\gamma_{\ell}(w)$, $\ell\in[n_{\ell}]$, given by
$\gamma_{\ell}(w)=\left\\{\begin{array}[]{ccl}0&\mbox{if
}&\tau_{\ell}(w)<0,\\\ \infty&\mbox{if}&\tau_{\ell}(w)\geq 0\mbox{ and
}\rho_{\ell}(w)=0,\\\
{\displaystyle{\frac{\tau_{\ell}(w)}{\rho_{\ell}(w)}}}&\mbox{if}&\tau_{\ell}(w)\geq
0\mbox{ and }\rho_{\ell}(w)>0,\end{array}\right.$ (30)
where $\tau_{\ell}(w)\doteq g_{\ell}(w)-f_{\ell}^{T}(w)\theta_{c}$ and
$\rho_{\ell}(w)\doteq\|H^{T}f_{\ell}(w)\|_{p^{*}}$, with $\|\cdot\|_{p^{*}}$
being the dual norm of $\|\cdot\|_{p}$.
Note that $\gamma(w)$ is equal to zero if and only if $\theta_{c}$ is not
included in the interior of $\mathbb{X}(w)$.
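The closed form of Theorem 2 translates into a few lines of code. The sketch below (names ours) evaluates $\gamma(w)$ for a norm-based SAS with $p\in\\{1,2,\infty\\}$, given $F(w)$, $g(w)$, $\theta_{c}$ and $H$.

```python
import numpy as np

def scaling_factor_norm_sas(F, g, theta_c, H, p=np.inf):
    """gamma(w) of Theorem 2 for S_p(gamma) = theta_c + gamma * H * B_p^s."""
    dual = {1: np.inf, 2: 2, np.inf: 1}[p]            # dual exponent p*
    tau = g - F @ theta_c                             # tau_ell(w)
    rho = np.linalg.norm(F @ H, ord=dual, axis=1)     # rho_ell(w) = ||H^T f_ell||_{p*}
    gammas = np.full(tau.shape, np.inf)               # case tau >= 0, rho = 0
    ok = (tau >= 0) & (rho > 0)
    gammas[ok] = tau[ok] / rho[ok]                    # case tau >= 0, rho > 0
    gammas[tau < 0] = 0.0                             # case tau < 0
    return gammas.min()
```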
### 6.2 Construction of a candidate norm-based set
Similarly to Section 5, we first draw a fixed number $N_{S}$ of “design”
uncertainty samples $\\{\tilde{w}^{(1)},\ldots,\tilde{w}^{(N_{S})}\\},$ and
construct an initial sampled approximation by introducing the following
sampled-polytope SAS ${\mathbb{S}}_{N_{S}}$ as defined in
(28). Again, we consider the Chebyshev center of
${\mathbb{S}}_{N_{S}}$, or its analytic center, as a possible center
$\theta_{c}$ for our approach.
Given ${\mathbb{S}}_{N_{S}}$, $s\geq n_{\theta}$ and $p\in\\{1,2,\infty\\}$,
the objective is to compute the largest set $\theta_{c}\oplus
H\mathbb{B}^{s}_{p}$ included in ${\mathbb{S}}_{N_{S}}$. To this end, we
assume that we have a function $\mathsf{Vol}_{p}(H)$ that provides a measure
of the size of $H\mathbb{B}^{s}_{p}$. That is, larger values of
$\mathsf{Vol}_{p}(H)$ are obtained for increasing sizes of
$H\mathbb{B}^{s}_{p}$.
###### Remark 3 (On the volume function).
The function $\mathsf{Vol}_{p}(H)$ may be seen as a generalization of the
classical concept of Lebesgue volume of the set ${\mathbb{S}}_{N_{S}}$.
Indeed, when $H$ is a square positive definite matrix, some possibilities are
$\mathsf{Vol}_{p}(H)=\log\,\det(H)$ – which is monotonically related to the
classical volume definition – or $\mathsf{Vol}_{p}(H)=\rm{tr}\,H$ – which for
$p=2$ becomes the well-known sum of the ellipsoid semiaxes (see [47] and [43,
Chapter 8]). These measures can easily be generalized to non-square matrices:
it suffices to compute the singular value decomposition. If $H=U\Sigma V^{T}$,
we could use the measures $\mathsf{Vol}_{p}(H)=\rm{tr}\,\Sigma$ or
$\mathsf{Vol}_{p}(H)=\log\,\det(\Sigma)$.
For non-square matrices $H$, specific results for particular values of $p$ are
known. For example, we recall that if $p=\infty$ and
$H\in\mathbb{R}^{n_{\theta}\times s}$, $s\geq n_{\theta}$, then
$\theta_{c}\oplus H\mathbb{B}^{s}_{\infty}$ is a zonotope. Then, if we denote
as a generator each of the columns of $H$, the volume of a zonotope can be
computed by means of a sum of terms (one for each different way of selecting
$n_{\theta}$ generators out of the $s$ generators of $H$); see [48], [49].
Another possible measure of the size of a zonotope $\theta_{c}\oplus
H\mathbb{B}^{s}_{\infty}$ is the Frobenius norm of $H$ [48].
Given an initial design set ${\mathbb{S}}_{N_{S}}$, we elect as our candidate
Scalable SAS the largest “volume” norm-based SAS contained in
${\mathbb{S}}_{N_{S}}$. Formally, this rewrites as the following optimization
problem
$\displaystyle\max\limits_{\theta_{c},H}~{}\mathsf{Vol}_{p}(H)$
$\displaystyle\text{subject to }\theta_{c}\oplus
H\mathbb{B}_{p}^{s}\subseteq{\mathbb{S}}_{N_{S}}$
It can be shown that this problem is equivalent to
$\displaystyle\min\limits_{\theta_{c},H}$ $\displaystyle-\mathsf{Vol}_{p}(H)$
s.t. $\displaystyle
f_{\ell}^{T}(\tilde{w}^{(j)})\theta_{c}+\|H^{T}f_{\ell}(\tilde{w}^{(j)})\|_{p^{*}}-g_{\ell}(\tilde{w}^{(j)})\leq
0,$ $\displaystyle\qquad\qquad\qquad\ell\in[n_{\ell}],\;j\in[N_{S}],$
where we have replaced the maximization of $\mathsf{Vol}_{p}(H)$ with the
minimization of -$\mathsf{Vol}_{p}(H)$.
We notice that the constraints are convex in the decision variables; also, the
functional to be minimized is convex under particular assumptions, for example
when $H$ is assumed to be square and positive definite and
$\mathsf{Vol}_{p}(H)=\log\det(H)$. For non-square matrices, the constraints
remain convex, but the convexity of the functional to be minimized is often
lost. In this case, local optimization algorithms should be employed to obtain
a possibly sub-optimal solution.
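As an illustration of the convex case, the following sketch uses the cvxpy modeling package (our choice of tool, not one prescribed by the paper) to compute the largest ellipsoidal SAS, i.e. $p=2$, $H$ square and positive semidefinite, with $\mathsf{Vol}_{p}(H)=\log\det H$, contained in the sampled polytope.

```python
import cvxpy as cp
import numpy as np

def max_volume_ellipsoidal_sas(F_samples, g_samples):
    """Largest theta_c + H*B_2 (H PSD) satisfying all sampled inequalities.

    F_samples[j], g_samples[j] are F(w~^(j)), g(w~^(j)) for the N_S design samples.
    """
    n_theta = F_samples[0].shape[1]
    H = cp.Variable((n_theta, n_theta), PSD=True)
    theta_c = cp.Variable(n_theta)
    constraints = []
    for F_j, g_j in zip(F_samples, g_samples):
        for f_l, g_l in zip(F_j, g_j):
            # f_l^T theta_c + ||H^T f_l||_2 <= g_l  (the dual of the 2-norm is the 2-norm)
            constraints.append(f_l @ theta_c + cp.norm(H @ f_l, 2) <= g_l)
    prob = cp.Problem(cp.Maximize(cp.log_det(H)), constraints)
    prob.solve()
    return theta_c.value, H.value
```

A soft-constrained variant in the spirit of the next subsection can be obtained by adding non-negative slack variables to the right-hand sides and penalizing them in the objective.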
(a) $\gamma=0.9701$
(b) $\gamma=1.5995$
(c) $\gamma=0.9696$
(d) $\gamma=1.5736$
Figure 4: Scaling procedure applied to (a) ${\mathbb{S}}_{1}$-SAS with
$N_{S}=100$, (b) ${\mathbb{S}}_{1}$-SAS with $N_{S}=1,000$,
(c) ${\mathbb{S}}_{\infty}$-SAS with $N_{S}=100$, and (d) ${\mathbb{S}}_{\infty}$-SAS
with $N_{S}=1,000$. The initial set is depicted in red, the final one in
green. The sampled design polytope ${\mathbb{S}}_{N_{S}}$ is represented in
black.
###### Example 4 (Norm-based SAS).
We revisit again Example 2 to show the use of norm-based SAS. We note that, in
this case, the designer can control the approximation outcome by acting upon
the number of design samples $N_{S}$ used for constructing the set
${\mathbb{S}}_{N_{S}}$. In Figure 4 we report two different norm-based SAS,
respectively with $p=1$ and $p=\infty$, and for each of them we consider two
different values of $N_{S}$, respectively $N_{S}=100$ and $N_{S}=1,000$.
Similarly to what was observed for the sampled-polytope SAS, we see that for larger
$N_{S}$, the ensuing initial set becomes smaller. Consequently, we have an
inflating process for small $N_{S}$ and a shrinking one for large $N_{S}$.
However, we observe that in this case the final number of inequalities is
independent of $N_{S}$, being equal to $3n_{\theta}+1=10$ for
${\mathbb{S}}_{1}$ and $2n_{\theta}$ for ${\mathbb{S}}_{\infty}$.
#### 6.2.1 Relaxed computation
It is worth remarking that the minimization problem of the previous
subsection might be infeasible. In order to guarantee the feasibility of the
problem, a soft-constrained optimization problem is proposed. With a relaxed
formulation, $\theta_{c}$ is not guaranteed to satisfy all the sampled
constraints. However, $\theta_{c}\in{\mathbb{S}}_{N_{S}}$ is not necessary to
obtain an approximation of the $\varepsilon$-CCS (in many practical applications, every element of
$\Theta$ has a non-zero probability of violation and ${\mathbb{S}}_{N_{S}}$ is
empty with non-zero probability). Moreover, a relaxed formulation is necessary
to address problems in which there is no element of $\Theta$ with probability
of violation equal to zero (or significantly smaller than $\varepsilon$). Not
considering the possibility of violations is an issue especially when $N_{S}$
is large, because the probability of obtaining an empty sampled set
${\mathbb{S}}_{N_{S}}$ grows with the number of samples $N_{S}$.
Given $\xi>0$, the relaxed optimization problem is
$\displaystyle\min\limits_{\theta_{c},H,\tau_{1},\ldots,\tau_{N_{S}}}~{}-\mathsf{Vol}_{p}(H)+\xi\sum\limits_{j=1}^{N_{S}}\max\\{\tau_{j},0\\}$
(31) $\displaystyle\text{s.t.
}\;f_{\ell}^{T}(w^{(j)})\theta_{c}+\|H^{T}f_{\ell}(w^{(j)})\|_{p^{*}}-g_{\ell}(w^{(j)})\leq\tau_{j},$
$\displaystyle\qquad\qquad\qquad\ell\in[n_{\ell}],\;j\in[N_{S}].$
The parameter $\xi$ serves to provide an appropriate trade-off between
satisfaction of the sampled constraints and the size of the obtained region. One
possibility is to choose $\xi$ in such a way that the
fraction of violations $n_{viol}/N_{S}$ (where $n_{viol}$ is the number of
elements $\tau_{j}$ larger than zero) is smaller than $\varepsilon/2$.
## 7 Numerical example: Probabilistic set membership estimation
We now present a numerical example in which the results of the paper are
applied to the probabilistic set membership estimation problem introduced in
Subsection 2.3. We consider the universal approximation functions given by
Gaussian radial basis function networks (RBFN) [50].
Given the nodes $[x_{1},x_{2},\ldots,x_{M}]$ and the variance parameter $c$,
the corresponding Gaussian radial basis function network is defined as
${\rm{RBFN}}(x,\theta)=\theta^{T}\varphi(x),$
where
$\theta=\left[\begin{array}[]{ccc}\theta_{1}&\ldots&\theta_{M}\end{array}\right]^{T}$
represents the weights and
$\varphi(x)=\left[\begin{array}[]{ccc}\exp\left(\frac{-\|x-x_{1}\|^{2}}{c}\right)&\ldots&\exp\left(\frac{-\|x-x_{M}\|^{2}}{c}\right)\end{array}\right]^{T}$
is the regressor function. Given $\delta\in(0,1)$ and $\varepsilon\in(0,1)$,
the objective is to obtain, with probability no smaller than $1-\delta$, an
inner approximation of the probabilistic feasible parameter set
${\mathsf{FPS}}_{\varepsilon}$, which is the set of parameters
$\theta\in\mathbb{R}^{M}$ that satisfies
$\mathsf{Pr}_{\mathbb{W}}\\{|y-\theta^{T}\varphi(x)|\leq\rho\\}\geq
1-\varepsilon,$ (32)
where $x$ is a random scalar with uniform distribution in $[-5,5]$ and
$y=\sin(3x)+\sigma,$
where $\sigma$ is a random scalar with a normal distribution with mean $5$ and
variance 1.
We use the procedure detailed in Sections 4, 5 and 6 to obtain an SAS of
${\mathsf{FPS}}_{\varepsilon}$. We have taken a grid of $M=20$ points in the
interval $[-5,5]$ to serve as nodes for the RBFN, and a variance parameter of
$c=0.15$. We have taken $N_{S}=350$ random samples $w=(x,y)$ to compute the
initial geometry, which has been chosen to be an $\ell_{\infty}$ norm-based
SAS of dimension 20 with a relaxation parameter of $\xi=1$ (see (31)). The
chosen initial geometry is $\theta_{c}\oplus H\mathbb{B}^{20}_{\infty}$, where
$H$ is constrained to be a diagonal matrix.
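A sketch of this setup in Python is given below: it builds the Gaussian RBFN regressor and the data-generating mechanism of this section, and shows how each sample is turned into the two linear inequalities of Subsection 2.3. The value of $\rho$ is left as a user choice, since it is not fixed here; all function names are ours.

```python
import numpy as np

M, c = 20, 0.15                        # number of nodes and variance parameter
nodes = np.linspace(-5.0, 5.0, M)      # grid of nodes in [-5, 5]
rng = np.random.default_rng(0)

def phi(x):
    """Gaussian RBFN regressor phi(x) in R^M."""
    return np.exp(-(x - nodes) ** 2 / c)

def sample_w(n):
    """n samples w = (x, y): x uniform on [-5, 5], y = sin(3x) + sigma, sigma ~ N(5, 1)."""
    x = rng.uniform(-5.0, 5.0, size=n)
    y = np.sin(3.0 * x) + rng.normal(5.0, 1.0, size=n)
    return x, y

def inequalities(x, y, rho):
    """Rows (F(w), g(w)) of Subsection 2.3 for a single sample w = (x, y)."""
    p = phi(x)
    return np.vstack([p, -p]), np.array([rho + y, rho - y])
```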
Once the initial geometry is obtained, we scale it around its center by means
of probabilistic scaling with Algorithm 1. The number of samples required for
the scaling phase to achieve $\varepsilon=0.05$ and $\delta=10^{-6}$ is
$N_{\gamma}=2065$ and the resulting scaling factor is $\gamma=0.3803$. The
scaled geometry $\theta_{c}\oplus\gamma H\mathbb{B}^{20}_{\infty}$ is, with a
probability no smaller than $1-\delta$, an inner approximation of
${\mathsf{FPS}}_{\varepsilon}$ which we will refer to as
${\mathsf{FPS}}_{\varepsilon}^{\delta}$. Since it is a transformation of an
$\ell_{\infty}$ norm ball with a diagonal matrix $H$, we can write it as
${\mathsf{FPS}}_{\varepsilon}^{\delta}=\\{\theta:\theta^{-}\leq\theta\leq\theta^{+}\\},$
where the extreme values $\theta^{-},\theta^{+}\in\mathbb{R}^{20}$ are
represented in Figure 5 [51], along with the central value
$\theta_{c}\in\mathbb{R}^{20}$.
Figure 5: Representation of the extreme values $\theta^{+}$ and $\theta^{-}$
and the central value $\theta_{c}$ of the
${\mathsf{FPS}}_{\varepsilon}^{\delta}$.
Once the ${\mathsf{FPS}}_{\varepsilon}^{\delta}$ has been computed, we can use
its center $\theta_{c}$ to make the point estimation
$y\approx\theta_{c}^{T}\varphi(x)$. We can also obtain probabilistic upper and
lower bounds of $y$ by means of equation (32). That is, every point in
${\mathsf{FPS}}_{\varepsilon}^{\delta}$ satisfies, with confidence $1-\delta$:
$\displaystyle\mathsf{Pr}_{\mathbb{W}}\\{y\leq\theta^{T}\varphi(x)+\rho\\}\geq
1-\varepsilon,$ (33)
$\displaystyle\mathsf{Pr}_{\mathbb{W}}\\{y\geq\theta^{T}\varphi(x)-\rho\\}\geq
1-\varepsilon.$
We notice that the tightest probabilistic bounds are obtained with
$\theta^{+}$ for the lower bound and $\theta^{-}$ for the upper one. That is,
we finally obtain that, with confidence $1-\delta$:
$\displaystyle\mathsf{Pr}_{\mathbb{W}}\\{y\leq{\theta^{-}}^{T}\varphi(x)+\rho\\}\geq
1-\varepsilon,$ (34)
$\displaystyle\mathsf{Pr}_{\mathbb{W}}\\{y\geq{\theta^{+}}^{T}\varphi(x)-\rho\\}\geq
1-\varepsilon.$
Figure 6 shows the results of both the point estimation and the probabilistic
interval estimation.
Figure 6: Real values of $y$ vs central estimation (blue) and interval
prediction bounds (red).
## 8 Conclusions, extensions, and future directions
In this paper, we proposed a general approach to construct probabilistically
guaranteed inner approximations of the chance-constraint set
$\mathbb{X}_{\varepsilon}$. The approach is very general and flexible.
First, we remark that the proposed scaling approach is not limited to sets
defined by linear inequalities, but immediately extends to more general sets.
Indeed, we may consider a generic binary performance function
$\phi:\Theta\times\mathbb{W}\to\\{0,\,1\\}$ defined as
$\phi(\theta,w)=\left\\{\begin{array}[]{ll}0&\text{if $\theta$ meets design
specifications for $w$}\\\ 1&\text{otherwise.}\end{array}\right.$ (35)
(Clearly, this formulation encompasses the setup discussed so far, which is obtained by simply setting
$\phi(\theta,w)=0$ if $F(w)\theta\leq g(w)$ and $\phi(\theta,w)=1$ otherwise.)
In this case, the violation probability may be written as
$\mathsf{Viol}(\theta)\doteq\mathsf{Pr}_{\mathbb{W}}\,\\{\,\phi(\theta,w)=1\,\\}=\mathbb{E}_{\mathbb{W}}\\{\phi(\theta,w)\\}$,
and we can still define the set $\mathbb{X}_{\varepsilon}$ as in (5). Then,
given an initial SAS candidate, Algorithm 1 still provides a valid
approximation. However, it should be remarked that, even if we choose a
“nice” SAS such as those previously introduced, the nonconvexity of $\phi$
will most likely render step 4 of the algorithm intractable. To further
elaborate on this point, let us focus on the case in which the design
specification may be expressed as a (nonlinear) inequality of the form
$\psi(\theta,w)\leq 0.$
Then, step 4 consists of solving the following nonconvex optimization problem
$\gamma_{i}\doteq\arg\max\limits_{\gamma}\,\gamma\quad\text{s.t.}\quad{\mathbb{S}}(\gamma)\subseteq\mathbb{X}(w^{(i)})=\Bigl\\{\theta\in\Theta\;|\;\psi(\theta,w^{(i)})\leq 0\Bigr\\}.$ (36)
We note that this is, in general, a hard problem. However, there are cases in
which it remains tractable. For instance, whenever $\psi(\theta,w)$ is a
convex function of $\theta$ for fixed $w$ and the set ${\mathbb{S}}$ is also
convex, the above optimization problem may be formulated as a convex program
by application of Finsler's lemma. We remark that, in such situations, the
approach proposed here remains completely viable, since all the derivations
continue to hold.
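Even when step 4 is intractable, the violation probability of a fixed candidate $\theta$ can always be estimated empirically, since $\mathsf{Viol}(\theta)=\mathbb{E}_{\mathbb{W}}\\{\phi(\theta,w)\\}$ is a simple sample average. The sketch below assumes two hypothetical user-supplied routines: `spec_violated(theta, w)`, returning 0/1 as in (35), and `sample_w()`, drawing one sample of the uncertainty.

```python
def empirical_violation(theta, spec_violated, sample_w, N=10000):
    """Monte Carlo estimate of Viol(theta) = E_W[ phi(theta, w) ]."""
    hits = sum(spec_violated(theta, sample_w()) for _ in range(N))
    return hits / N
```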
Second, we remark that this paper opens the way to the design of other
families of scalable SAS. For instance, we are currently working on using the
family of sets defined in the form of polynomial superlevel sets (PSS)
proposed in [52].
## Appendix A Appendix
### A.1 Proof of Lemma 1
To prove the lemma, we first recall the following definition from [41].
###### Definition 4 ($(\alpha,k)$-Boolean Function).
The function $h:\Theta\times\mathbb{W}\to\mathbb{R}$ is an
$(\alpha,k)$-Boolean function if, for fixed $w$, it can be written as an
expression consisting of Boolean operators involving $k$ polynomials
$p_{1}(\theta),p_{2}(\theta),\ldots,p_{k}(\theta)$ in the components
$\theta_{i}$, $i\in[n_{\theta}]$, whose degree with respect to each
$\theta_{i}$ is no larger than $\alpha$.
Let us now define the binary functions
$h_{\ell}(\theta,w)\doteq\left\\{\begin{array}[]{rl}0&\mbox{ if
}f_{\ell}(w)\theta\leq g_{\ell}(w)\\\ 1&\mbox{
otherwise}\end{array}\right.,\;\ell\in[n_{\ell}].$
Introducing the function
$h(\theta,w)\doteq\max\limits_{\ell=1,\ldots,n_{\ell}}h_{\ell}(\theta,w),$ we
see that the violation probability can be alternatively written as
$\mathsf{Viol}(\theta)\doteq\mathsf{Pr}_{\mathbb{W}}\,\\{\,h(\theta,w)=1\,\\}.$
The proof immediately follows by observing that $h(\theta,w)$ is a
$(1,n_{\ell})$-Boolean function, since it can be expressed as a function of
$n_{\ell}$ Boolean functions, each of them involving a polynomial of degree 1.
Indeed, it is proven in [41, Theorem 8] that, if
$h:\Theta\times\mathbb{W}\to\mathbb{R}$ is an $(\alpha,k)$-Boolean function
then, for $\varepsilon\in(0,0.14)$, with probability greater than $1-\delta$
we have $\mathsf{Pr}_{\mathbb{W}}\,\\{\,h(\theta,w)=1\,\\}\leq\varepsilon$ if
$N$ is chosen such that
$N\geq\frac{4.1}{\varepsilon}\Big{(}\ln\frac{21.64}{\delta}+4.39n_{\theta}\,\log_{2}\Big{(}\frac{8e\alpha
k}{\varepsilon}\Big{)}\Big{)}.$
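This bound is straightforward to evaluate numerically; the sketch below simply transcribes the formula, with the linear case of Lemma 1 corresponding to $\alpha=1$ and $k=n_{\ell}$.

```python
import numpy as np

def sample_bound(eps, delta, n_theta, alpha, k):
    """Sample size from [41, Theorem 8] for an (alpha, k)-Boolean function,
    valid for eps in (0, 0.14)."""
    return int(np.ceil(
        4.1 / eps * (np.log(21.64 / delta)
                     + 4.39 * n_theta * np.log2(8 * np.e * alpha * k / eps))
    ))

# Linear case of Lemma 1: sample_bound(eps, delta, n_theta, alpha=1, k=n_ell)
```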
### A.2 Property 1
###### Property 1.
Given $\varepsilon\in(0,1)$, $\delta\in(0,1)$, and $0\leq r\leq N$, let $N$ be
such that $\mathbf{B}(r;N,\varepsilon)\leq\delta$. Draw $N$ iid sample-sets
$\\{\mathbb{X}^{(1)},\mathbb{X}^{(2)},\ldots,\mathbb{X}^{(N)}\\}$ from a
distribution $\mathsf{Pr}_{\mathbb{X}}$. For $i\in[N]$, let
$\gamma_{i}\doteq\gamma(\mathbb{X}^{(i)})$, with $\gamma(\cdot)$ as in
Definition 3, and suppose that $\bar{\gamma}=\gamma_{1+r:N}>0$. Then, with
probability no smaller than $1-\delta$, it holds that
$\mathsf{Pr}_{\mathbb{X}}\\{\theta_{c}\oplus\bar{\gamma}{\mathbb{S}}\not\subseteq\mathbb{X}\\}\leq\varepsilon$.
Proof: It has been proven in [38, 39] that if one discards no more than $r$
constraints on a convex problem with $N$ random constraints, then the
probability of violating the constraints with the solution obtained from the
random convex problem is no larger than $\varepsilon\in(0,1)$, with
probability no smaller than $1-\delta$, where
$\delta=\binom{d+r-1}{d-1}\sum\limits_{i=0}^{d+r-1}\binom{N}{i}\varepsilon^{i}(1-\varepsilon)^{N-i},$
and $d$ is the number of decision variables. We apply this result to the
following optimization problem
$\max\limits_{\gamma}\gamma\text{ subject to
}\theta_{c}\oplus\gamma{\mathbb{S}}\subseteq\mathbb{X}^{(i)},\;\;i\in[N].$
From Definition 3, we can rewrite this optimization problem as
$\max\limits_{\gamma}\gamma\text{ subject to
}\gamma\leq\gamma(\mathbb{X}^{(i)}),\;i\in[N].$
We first notice that the problem under consideration is convex and has a
single scalar decision variable $\gamma$, that is, $d=1$. Moreover, the
non-degeneracy and uniqueness assumptions required to apply the results of
[38] and [39] are satisfied. Hence, if we allow $r$ violations in the above
maximization problem, we have that, with probability no smaller than
$1-\delta$, where
$\delta=\binom{r}{0}\sum\limits_{i=0}^{r}\binom{N}{i}\varepsilon^{i}(1-\varepsilon)^{N-i}=\mathbf{B}(r;N,\varepsilon),$
the solution $\bar{\gamma}$ of problem (A.2) satisfies
$\mathsf{Pr}_{\mathbb{X}}\\{\bar{\gamma}>\gamma(\mathbb{X})\\}\leq\varepsilon.$
We conclude from this, and Definition 3, that with probability no smaller than
$1-\delta$,
$\mathsf{Pr}_{\mathbb{X}}\\{\theta_{c}\oplus\bar{\gamma}{\mathbb{S}}\not\subseteq\mathbb{X}\\}\leq\varepsilon.$
Finally, note that the optimization problem under consideration can be solved
directly by ordering the values $\gamma_{i}=\gamma(\mathbb{X}^{(i)})$. It is
clear that if $r\geq 0$ violations are allowed, then the optimal value for
$\gamma$ is $\bar{\gamma}=\gamma_{r+1:N}$. ∎
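The condition $\mathbf{B}(r;N,\varepsilon)\leq\delta$ required by Property 1 is just a binomial CDF evaluation, so it can be checked directly; a sketch using `scipy.stats.binom` (our choice of library) is given below.

```python
from scipy.stats import binom

def binomial_tail(r, N, eps):
    """B(r; N, eps) = sum_{i=0}^{r} C(N, i) eps^i (1 - eps)^(N - i)."""
    return binom.cdf(r, N, eps)

# Property 1 applies whenever binomial_tail(r, N, eps) <= delta,
# with gamma_bar taken as the (1+r)-th smallest sampled scaling factor.
```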
### A.3 Proof of Theorem 2
Note that, by definition, the condition $\theta_{c}\oplus\gamma
H\mathbb{B}^{s}_{p}\subseteq\mathbb{X}(w)$ is equivalent to
$\max\limits_{z\in\mathbb{B}^{s}_{p}}f_{\ell}^{T}(w)(\theta_{c}+\gamma
Hz)-g_{\ell}(w)\leq 0,\;\ell\in[n_{\ell}].$
Equivalently, from the dual norm definition, we have
$f_{\ell}^{T}(w)\theta_{c}+\gamma\|H^{T}f_{\ell}(w)\|_{p^{*}}-g_{\ell}(w)\leq
0,\;\ell\in[n_{\ell}].$
Denote by $\gamma_{\ell}$ the scaling factor corresponding to
the $\ell$-th constraint
$f_{\ell}^{T}(w)\theta_{c}+\gamma_{\ell}\|H^{T}f_{\ell}(w)\|_{p^{*}}-g_{\ell}(w)\leq
0.$
With the notation introduced in the Lemma, this constraint can be rewritten as
$\gamma_{\ell}\rho_{\ell}(w)\leq\tau_{\ell}(w).$
The result follows by noting that the corresponding scaling factor
$\gamma_{\ell}(w)$ can be computed as
$\gamma_{\ell}(w)=\max_{\gamma_{\ell}\rho_{\ell}(w)\leq\tau_{\ell}(w)}\gamma_{\ell},$
and that the value for $\gamma(w)$ is obtained from the most restrictive one.
∎
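A direct transcription of this computation is sketched below for the set $\theta_{c}\oplus\gamma H\mathbb{B}^{s}_{p}$: $\tau_{\ell}(w)=g_{\ell}(w)-f_{\ell}^{T}(w)\theta_{c}$, $\rho_{\ell}(w)=\|H^{T}f_{\ell}(w)\|_{p^{*}}$, and $\gamma(w)$ is the most restrictive ratio. The handling of the degenerate case $\rho_{\ell}(w)=0$ is our own convention, not part of the theorem.

```python
import numpy as np

def gamma_of_w(F, g, theta_c, H, p=np.inf):
    """Scaling factor gamma(w) for theta_c (+) gamma * H * B_p.

    F: (n_ell, n_theta) matrix with rows f_ell(w)^T; g: (n_ell,) vector."""
    if np.isinf(p):
        p_star = 1.0          # dual of the infinity norm
    elif p == 1:
        p_star = np.inf       # dual of the 1-norm
    else:
        p_star = p / (p - 1.0)
    tau = g - F @ theta_c                            # tau_ell(w)
    rho = np.linalg.norm(F @ H, ord=p_star, axis=1)  # rho_ell(w) = ||H^T f_ell||_p*
    gammas = np.full(len(g), np.inf)                 # rho = 0, tau >= 0: unconstrained
    pos = rho > 0
    gammas[pos] = tau[pos] / rho[pos]
    gammas[(~pos) & (tau < 0)] = -np.inf             # rho = 0, tau < 0: infeasible
    return gammas.min()                              # most restrictive constraint
```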
## References
* [1] A. Prékopa, _Stochastic Programming_. Springer Science & Business Media, 2013.
* [2] N. V. Sahinidis, “Optimization under uncertainty: state-of-the-art and opportunities,” _Computers & Chemical Engineering_, vol. 28, no. 6-7, pp. 971–983, 2004.
* [3] A. Ben-Tal and A. Nemirovski, “Robust convex optimization,” _Mathematics of Operations Research_ , vol. 23, pp. 769–805, 1998.
* [4] G. Calafiore, F. Dabbene, and R. Tempo, “Research on probabilistic methods for control system design,” _Automatica_ , vol. 47, pp. 1279–1293, 2011.
* [5] R. Tempo, G. Calafiore, and F. Dabbene, _Randomized Algorithms for Analysis and Control of Uncertain Systems: with Applications_. Springer Science & Business Media, 2012.
* [6] M. Mammarella, E. Capello, F. Dabbene, and G. Guglieri, “Sample-based SMPC for tracking control of fixed-wing UAV,” _IEEE Control Systems Letters_ , vol. 2, no. 4, pp. 611–616, 2018.
* [7] J. Li, W. Zhan, Y. Hu, and M. Tomizuka, “Generic tracking and probabilistic prediction framework and its application in autonomous driving,” _IEEE Transactions on Intelligent Transportation Systems_ , 2019.
* [8] M. Chamanbaz, F. Dabbene, and C. Lagoa, _Algorithms for Optimal AC Power Flow in the Presence of Renewable Sources_. Wiley Encyclopedia of Electrical and Electronics Engineering, 2020, pp. 1–13.
* [9] M. Chamanbaz, F. Dabbene, and C. M. Lagoa, “Probabilistically robust AC optimal power flow,” _IEEE Transactions on Control of Network Systems_ , vol. 6, no. 3, pp. 1135–1147, 2019.
* [10] X. Geng and L. Xie, “Data-driven decision making in power systems with probabilistic guarantees: Theory and applications of chance-constrained optimization,” _Annual Reviews in Control_ , vol. 47, pp. 341–363, 2019.
* [11] M. Lorenzen, F. Dabbene, R. Tempo, and F. Allgöwer, “Stochastic MPC with offline uncertainty sampling,” _Automatica_ , vol. 81, no. 1, pp. 176–183, 2017.
* [12] M. Mammarella, M. Lorenzen, E. Capello, H. Park, F. Dabbene, G. Guglieri, M. Romano, and F. Allgöwer, “An offline-sampling SMPC framework with application to autonomous space maneuvers,” _IEEE Transactions on Control Systems Technology_ , pp. 1–15, 2018.
* [13] T. Alamo, V. Mirasierra, F. Dabbene, and M. Lorenzen, “Safe approximations of chance constrained sets by probabilistic scaling,” in _2019 18th European Control Conference (ECC)_. IEEE, 2019, pp. 1380–1385.
* [14] M. Mammarella, T. Alamo, F. Dabbene, and M. Lorenzen, “Computationally efficient stochastic mpc: a probabilistic scaling approach,” in _Proc. of 4th IEEE Conference on Control Technology and Applications_ , 2020.
* [15] M. Ahsanullah, V. Nevzorov, and M. Shakil, _An introduction to Order Statistics_. Paris: Atlantis Press, 2013.
* [16] B. Miller and H. Wagner, “Chance constrained programming with joint constraints,” _Operations Research_ , vol. 13, pp. 930–945, 1965.
* [17] L. Khachiyan, “The problem of calculating the volume of a polyhedron is enumerably hard,” _Russian Mathematical Surveys_ , 1989.
* [18] A. Shapiro, D. Dentcheva, and A. Ruszczyński, _Lectures on stochastic programming: modeling and theory_. SIAM, 2014.
* [19] W. van Ackooij, “Eventual convexity of chance constrained feasible sets,” _Optimization_ , vol. 64, no. 5, pp. 1263–1284, 2015.
* [20] A. Prékopa, T. Rapcsák, and I. Zsuffa, “Serially linked reservoir system design using stochastic programing,” _Water Resources Research_ , vol. 14, no. 4, 1978.
* [21] D. Dentcheva, B. Lai, and A. Ruszczyński, “Dual methods for probabilistic optimization problems*,” _Mathematical Methods of Operations Research_ , vol. 60, no. 2, pp. 331–346, 2004.
* [22] M. Lorenzen, F. Dabbene, R. Tempo, and F. Allgöwer, “Constraint-tightening and stability in stochastic model predictive control,” _IEEE Transactions on Automatic Control_ , vol. 62, no. 7, pp. 3165–3177, 2017.
* [23] A. Vicino and G. Zappa, “Sequential approximation of feasible parameter sets for identification with set membership uncertainty,” _IEEE Transactions on Automatic Control_ , vol. 41, no. 6, pp. 774–785, 1996.
* [24] J. M. Bravo, T. Alamo, and E. F. Camacho, “Bounded error identification of systems with time-varying parameters,” _IEEE Transactions on Automatic Control_ , vol. 51, no. 7, pp. 1144–1150, 2006.
* [25] S. Kataoka, “A stochastic programming model,” _Econometrica: Journal of the Econometric Society_ , pp. 181–196, 1963.
* [26] A. Prékopa, “Logarithmic concave measures with application to stochastic programming,” _Acta Scientiarum Mathematicarum_ , pp. 301–316, 1971.
* [27] C. M. Lagoa, “On the convexity of probabilistically constrained linear programs,” in _Proceedings of the 38th IEEE Conference on Decision and Control (Cat. No.99CH36304)_ , vol. 1, 1999, pp. 516–521.
* [28] G. C. Calafiore and L. E. Ghaoui, “On distributionally robust chance-constrained linear programs,” _Journal of Optimization Theory and Applications_ , vol. 130, no. 1, pp. 1–22, 2006.
* [29] R. Henrion and C. Strugarek, “Convexity of chance constraints with independent random variables,” _Computational Optimization and Applications_ , vol. 41, no. 2, pp. 263–276, 2008.
* [30] L. Hewing and M. N. Zeilinger, “Stochastic model predictive control for linear systems using probabilistic reachable sets,” in _2018 IEEE Conference on Decision and Control (CDC)_ , 2018, pp. 5182–5188.
* [31] S. Yan, P. Goulart, and M. Cannon, “Stochastic model predictive control with discounted probabilistic constraints,” in _2018 European Control Conference (ECC)_. IEEE, 2018, pp. 1003–1008.
* [32] A. Nemirovski and A. Shapiro, “Convex approximations of chance constrained programs,” _SIAM Journal on Optimization_ , vol. 17, no. 4, pp. 969–996, 2006.
* [33] W. Chen, M. Sim, J. Sun, and C.-P. Teo, “From CVaR to uncertainty set: Implications in joint chance-constrained optimization,” _Operations Research_ , vol. 58, no. 2, pp. 470–485, 2010.
* [34] A. Jasour, N. S. Aybat, and C. M. Lagoa, “Semidefinite programming for chance constrained optimization over semialgebraic sets,” _SIAM Journal on Optimization_ , vol. 25, no. 3, pp. 1411–1440, 2015.
* [35] J. B. Lasserre, “Representation of chance-constraints with strong asymptotic guarantees,” _IEEE Control Systems Letters_ , vol. 1, no. 1, pp. 50–55, 2017.
* [36] G. Calafiore and M. Campi, “The scenario approach to robust control design,” _IEEE Transactions on Automatic Control_ , vol. 51, no. 5, pp. 742–753, 2006\.
* [37] M. Campi and S. Garatti, “The exact feasibility of randomized solutions of robust convex programs,” _SIAM Journal of Optimization_ , vol. 19, pp. 1211–1230, 2008.
* [38] G. Calafiore, “Random convex programs,” _SIAM Journal of Optimization_ , vol. 20, pp. 3427–3464, 2010.
* [39] M. Campi and S. Garatti, “A sampling-and-discarding approach to chance-constrained optimization: feasibility and optimality,” _Journal of Optimization Theory and Applications_ , vol. 148, pp. 257–280, 2011.
* [40] V. Vapnik, _Statistical Learning Theory_. New York: John Wiley and Sons, 1998.
* [41] T. Alamo, R. Tempo, and E. F. Camacho, “Randomized strategies for probabilistic solutions of uncertain feasibility and optimization problems,” _IEEE Transactions on Automatic Control_ , vol. 54, no. 11, pp. 2545–2559, 2009.
* [42] T. Alamo, R. Tempo, A. Luque, and D. Ramirez, “Randomized methods for design of uncertain systems: Sample complexity and sequential algorithms,” _Automatica_ , vol. 52, pp. 160–172, 2015.
* [43] S. Boyd and L. Vandenberghe, _Convex Optimization_. Cambridge University Press, 2004.
* [44] M. Herceg, M. Kvasnica, C. N. Jones, and M. Morari, “Multi-parametric toolbox 3.0,” in _2013 European control conference (ECC)_. IEEE, 2013, pp. 502–510.
* [45] F. Dabbene, C. Lagoa, and P. Shcherbakov, “On the complexity of randomized approximations of nonconvex sets,” in _2010 IEEE International Symposium on Computer-Aided Control System Design_. IEEE, 2010, pp. 1564–1569.
* [46] V. T. H. Le, C. Stoica, T. Alamo, E. F. Camacho, and D. Dumur, _Zonotopes: From Guaranteed State-estimation to Control_. Wiley, 2013.
* [47] F. Dabbene, D. Henrion, C. Lagoa, and P. Shcherbakov, “Randomized approximations of the image set of nonlinear mappings with applications to filtering,” _IFAC-PapersOnLine_ , vol. 48, no. 14, pp. 37–42, 2015.
* [48] T. Alamo, J. M. Bravo, and E. F. Camacho, “Guaranteed state estimation by zonotopes,” _Automatica_ , vol. 41, no. 6, pp. 1035–1043, 2005.
* [49] E. Gover and N. Krikorian, “Determinants and the volumes of parallelotopes and zonotopes,” _Linear Algebra and its Applications_ , vol. 433, no. 1, pp. 28–40, 2010.
* [50] M. D. Buhmann, “Radial basis functions,” _Acta numerica_ , vol. 9, pp. 1–38, 2000.
* [51] J. Lemon, “Plotrix: a package in the red light district of R,” _R-News_, vol. 6, no. 4, pp. 8–12, 2006.
* [52] F. Dabbene, D. Henrion, and C. M. Lagoa, “Simple approximations of semialgebraic sets and their applications to control,” _Automatica_ , vol. 78, pp. 110 – 118, 2017.