Rebuilt site
evansuva committed Nov 23, 2019
1 parent 4172db9 · commit f32b6bb
Showing 17 changed files with 431 additions and 27 deletions.
2 changes: 1 addition & 1 deletion content/concentration.md
@@ -7,7 +7,7 @@ title = "Empirically Measuring Concentration"
Recent theoretical results, starting with Gilmer et al.'s
[_Adversarial Spheres_](https://aipavilion.github.io/) (2018), show
that if inputs are drawn from a concentrated metric probability space,
then adversarial examples with small perturbation are inevitable. The
key insight from this line of research is that [_concentration of
measure_](https://en.wikipedia.org/wiki/Concentration_of_measure)
gives a lower bound on adversarial risk for a large collection of
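
For intuition, here is a minimal sketch of the concentration argument in our own notation (not taken from the papers above): on a concentrated space, any error region of non-trivial measure expands under small perturbations to cover almost the whole space.

```latex
% Sketch, our notation: (X, d, \mu) a metric probability space,
% E the classifier's error region, E_\epsilon its \epsilon-expansion.
\[
  E_\epsilon = \{\, x \in X : d(x, E) \le \epsilon \,\}, \qquad
  \mathrm{AdvRisk}_\epsilon(f) = \mu(E_\epsilon).
\]
% Concentration of measure: whenever \mu(E) \ge 1/2,
\[
  \mu(E_\epsilon) \;\ge\; 1 - \alpha(\epsilon),
\]
% where \alpha is the concentration function of (X, d, \mu). On the unit
% sphere S^{n-1}, \alpha(\epsilon) \le c\, e^{-(n-1)\epsilon^2/2} for a small
% constant c, so in high dimension even tiny \epsilon drives the risk toward 1.
```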
39 changes: 39 additions & 0 deletions content/costsensitive.md
@@ -0,0 +1,39 @@
+++
title = "Cost-Sensitive Robustness"
+++

Several recent works have developed methods for training classifiers
that are certifiably robust against norm-bounded adversarial
perturbations. However, these methods assume that all the adversarial
transformations provide equal value for adversaries, which is seldom
the case in real-world applications.

We advocate for cost-sensitive robustness as the criterion for
measuring the classifier's performance for specific tasks. We encode
the potential harm of different adversarial transformations in a cost
matrix, and propose a general objective function to adapt the robust
training method of Wong & Kolter (2018) to optimize for cost-sensitive
robustness. Our experiments on simple MNIST and CIFAR10 models and a
variety of cost matrices show that the proposed approach can produce
models with substantially reduced cost-sensitive robust error, while
maintaining classification accuracy.
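
As a rough illustration of how a cost matrix can enter a robust training objective — a hypothetical sketch under our own naming, not the authors' implementation (the certified-margin input stands in for a bound like Wong & Kolter's; see the code link below for the real method):

```python
import torch

def cost_sensitive_robust_loss(margin_bounds, labels, cost_matrix):
    """Hypothetical sketch: weight a per-transition robust surrogate by costs.

    margin_bounds: (batch, num_classes) tensor whose entry [i, j] upper-bounds
        how far example i can be pushed toward class j by an allowed
        perturbation (a stand-in for a certified bound).
    labels: (batch,) true classes.
    cost_matrix: (num_classes, num_classes); entry [y, j] is the harm of an
        adversary moving a class-y input to class j (0 = transition we ignore).
    """
    per_example_costs = cost_matrix[labels]     # (batch, num_classes)
    violations = torch.relu(margin_bounds)      # positive where class j is reachable
    # Only cost-weighted transitions contribute to the robust term.
    return (per_example_costs * violations).sum(dim=1).mean()

# Example: protect the odd classes of a 10-class task, as in the figure below.
C = torch.zeros(10, 10)
C[1::2, :] = 1.0           # any transition out of an odd class is costly
C.fill_diagonal_(0.0)      # staying in the true class costs nothing

bounds = torch.randn(4, 10)        # stand-in for certified margin bounds
y = torch.tensor([0, 1, 2, 3])
loss = cost_sensitive_robust_loss(bounds, y, C)
```

In the full objective, a term like this would be combined with an ordinary classification loss, trading off accuracy against cost-weighted robustness.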

<center>
<img src="/images/protecteven.png" width="70%">
<div class="caption" align="left" style="padding-left:5rem;padding-right:5rem">
This shows the results of cost-sensitive robustness training to protect the odd classes. By incorporating a cost matrix in the loss function for robustness training, we can produce a model where selected transitions are more robust to adversarial transformation.
</center>

<center>
<a href="/docs/cost-sensitive-poster.pdf"><img src="/images/cost-sensitive-poster-small.png" width="90%" align="center"></a>
</center>

### Paper

Xiao Zhang and David Evans. [_Cost-Sensitive Robustness against Adversarial Examples_](/docs/cost-sensitive-robustness.pdf). In <a
href="https://iclr.cc/Conferences/2019"><em>Seventh International Conference on Learning Representations</em></a> (ICLR). New Orleans. May 2019. [<a href="https://arxiv.org/abs/1810.09225">arXiv</a>] [<a
href="https://openreview.net/forum?id=BygANhA9tQ">OpenReview</a>] [<a href="/docs/cost-sensitive-robustness.pdf">PDF</a>]

### Code

[_https://github.com/xiaozhanguva/Cost-Sensitive-Robustness_](https://github.com/xiaozhanguva/Cost-Sensitive-Robustness)
Binary file added content/docs/cost-sensitive-poster.pdf
Binary file not shown.
Binary file added content/images/cost-sensitive-cifar.png
Binary file added content/images/cost-sensitive-poster-small.png
Binary file added content/images/protecteven.png
42 changes: 36 additions & 6 deletions content/main.md
@@ -31,7 +31,7 @@ and sophisticated adversaries.
<section style="display: table;width: 100%">
<header style="display: table-row; padding: 0.5rem">
<div style="display: table-cell; padding: 0.5rem; color:#FFFFFF;background:#663399;text-align: center;width: 49%">
<a href="/gpevasion" class="hlink">Genetic&nbsp;Programming</a>
<a href="/concentration" class="hlink">Measuring Concentration</a>
</div>
<div style="display: table-cell; padding: 0.5rem;color:#000000;background: #FFFFFF;text-align: center; width:2%"></div>
<div style="display: table-cell; padding: 0.5rem;color:#FFFFFF;background: #2c0f52;text-align: center;">
@@ -40,15 +40,45 @@
</header>
<div style="display: table-row;">
<div style="display: table-cell;">
<a href="/gpevasion"><img src="/images/geneticsearch.png" alt="Genetic Search" width="100%" align="center"></a><br>
Evolutionary framework to automatically find variants that preserve malicious behavior but evade a target classifier.
<a href="/concentration"><img src="/images/concentration/alg.png" alt="Empirically Measuring Concentration" width="100%" align="center"></a><br>
Method to empirically
measure concentration of real datasets, finding that it does not
explain the lack of robustness of state-of-the-art models.<br></br>
</div>

<div style="display: table-cell;"></div>
<div style="display: table-cell;text-align:center">

<div style="display: table-cell;text-align:left;">
<a href="/squeezing"><img src="/images/squeezing.png" alt="Feature Squeezing" width="100%" align="center"></a><br>
Reducing the search space for adversaries by coalescing inputs.<br>
<font size="-1" style="color:#666;">(The top row shows L<sub>0</sub> adversarial examples, squeezed by median smoothing.)</font>
Reduce search space for adversaries by coalescing inputs. <font style="color:#666;line-height:0.5;" size="-1">(Top row shows $\ell_0$ adversarial examples, squeezed by median smoothing; see the sketch after this section.)</font>
</div>

</div>


<header style="display: table-row; padding: 0.5rem">
<div style="display: table-cell; padding: 0.5rem;color:#FFFFFF;background: #2c0f52;text-align: center;">
<a href="/costsensitive" class="hlink">Cost-Sensitive Robustness</a>
</div>
<div style="display: table-cell; padding: 0.5rem;color:#000000;background: #FFFFFF;text-align: center; width:2%"></div>
<div style="display: table-cell; padding: 0.5rem; color:#FFFFFF;background:#663399;text-align: center;width: 49%">
<a href="/gpevasion" class="hlink">Genetic&nbsp;Programming</a>
</div>

</header>
<div style="display: table-row;">

<div style="display: table-cell;padding:0">
<center><a href="/costsensitive"><img src="/images/cost-sensitive-cifar.png" alt="Cost-Sensitive Robustness" width="80%" align="center"></a></center>
<!--Focus robust training on transitions given by a cost matrix to make security-critical transitions robust.-->
</div>

<div style="display: table-cell;"></div>
<div style="display: table-cell;">
<a href="/gpevasion"><img src="/images/geneticsearch.png" alt="Genetic Search" width="100%" align="center"></a><br>
Evolutionary framework to automatically find variants that preserve malicious behavior but evade a target classifier.
</div>

</div>
</section>
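
The Feature Squeezing cell above mentions squeezing by median smoothing; here is a generic sketch of that operation (assuming SciPy; the 2×2 window is a placeholder, not necessarily the project's setting):

```python
import numpy as np
from scipy.ndimage import median_filter

def median_smoothing_squeeze(image: np.ndarray, size: int = 2) -> np.ndarray:
    """Replace each pixel with the median of its neighborhood.

    Sparse L0-style perturbations flip a few pixels drastically; a median
    over a small window discards such outliers while keeping edges.
    image: float array in [0, 1], shape (H, W) or (H, W, C).
    """
    if image.ndim == 2:
        return median_filter(image, size=size)
    return median_filter(image, size=(size, size, 1))  # don't mix channels

# Detection idea from the Feature Squeezing work: compare predictions on the
# original and squeezed inputs; a large disagreement suggests the input is
# adversarial.
x = np.random.rand(28, 28)
x_squeezed = median_smoothing_squeeze(x)
```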

2 changes: 1 addition & 1 deletion public/concentration/index.html
@@ -134,7 +134,7 @@ <h2 id="estimating-the-intrinsic-robustness-for-image-benchmarks">Estimating the
<p>Recent theoretical results, starting with Gilmer et al.&rsquo;s
<a href="https://aipavilion.github.io/"><em>Adversarial Spheres</em></a> (2018), show
that if inputs are drawn from a concentrated metric probability space,
then adversarial examples with small perturbation are inevitable. The
key insight from this line of research is that <a href="https://en.wikipedia.org/wiki/Concentration_of_measure"><em>concentration of
measure</em></a>
gives a lower bound on adversarial risk for a large collection of
193 changes: 193 additions & 0 deletions public/costsensitive/index.html
@@ -0,0 +1,193 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en-us">
<head>
<title>
Cost-Sensitive Robustness // EvadeML
</title>

<link href="http://gmpg.org/xfn/11" rel="profile">
<meta http-equiv="content-type" content="text/html; charset=utf-8">


<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1">

<meta name="description" content="">
<meta name="keywords" content="">
<meta name="author" content="">
<meta name="generator" content="Hugo 0.17" />

<meta property="og:title" content="Cost-Sensitive Robustness" />
<meta property="og:description" content="" />
<meta property="og:type" content="website" />
<meta property="og:locale" content="en_US" />
<meta property="og:url" content="//evademl.org/costsensitive/" />



<link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/pure/0.5.0/base-min.css">
<link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/pure/0.5.0/pure-min.css">


<link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/pure/0.5.0/grids-responsive-min.css">



<link rel="stylesheet" href="//evademl.org/css/srg.css">
<link href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet">
<link href='//fonts.googleapis.com/css?family=Open+Sans:400,400italic,200,100,700,300,500,600,800' rel='stylesheet' type='text/css'>
<link href='//fonts.googleapis.com/css?family=Libre+Baskerville:400,700,400italic' rel='stylesheet' type='text/css'>


<link rel="apple-touch-icon-precomposed" sizes="144x144" href="/rotunda.png">
<link rel="shortcut icon" href="/rotunda.png">


<link href="" rel="alternate" type="application/rss+xml" title="EvadeML" />

<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js"],
jax: ["input/TeX", "output/HTML-CSS"],
tex2jax: {
inlineMath: [ ['$','$'], ["\\(","\\)"] ],
displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
processEscapes: true
},
messageStyle: "none",
"HTML-CSS": { availableFonts: ["TeX"] }
});
</script>
<script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js">
</script>




<link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/8.4/styles/tomorrow-night-bright.min.css">

<script src="//cdnjs.cloudflare.com/ajax/libs/highlight.js/8.4/highlight.min.js"></script>
<script>hljs.initHighlightingOnLoad();</script>







</head>

<body>


<div id="layout" class="pure-g">
<div class="sidebar pure-u-1 pure-u-md-1-4">
<div class="header">
<p class="brand-group">

<a href="https://www.cs.virginia.edu/yanjun/gQdata.htm">Maching Learning Group</a><br>
and <a href="http://www.jeffersonswheel.org">Security Research Group</a><br>
<a href="http://www.cs.virginia.edu">University of Virginia</a>
</p>



<a href="//evademl.org"><h1 class="brand-title">EvadeML</h1></a>
<p class="brand-tagline">Machine Learning in the Presence of Adversaries</p>





</div>
</div>




<div class="content pure-u-1 pure-u-md-3-4">
<a name="top"></a>



<section class="post">
<h1 class="post-title">
<a href="/costsensitive/">Cost-Sensitive Robustness</a>
</h1>
<h3 class="post-subtitle">

</h3>













<p>Several recent works have developed methods for training classifiers
that are certifiably robust against norm-bounded adversarial
perturbations. However, these methods assume that all the adversarial
transformations provide equal value for adversaries, which is seldom
the case in real-world applications.</p>

<p>We advocate for cost-sensitive robustness as the criterion for
measuring the classifier&rsquo;s performance for specific tasks. We encode
the potential harm of different adversarial transformations in a cost
matrix, and propose a general objective function to adapt the robust
training method of Wong &amp; Kolter (2018) to optimize for cost-sensitive
robustness. Our experiments on simple MNIST and CIFAR10 models and a
variety of cost matrices show that the proposed approach can produce
models with substantially reduced cost-sensitive robust error, while
maintaining classification accuracy.</p>

<p><center>
<img src="/images/protecteven.png" width="70%">
<div class="caption" align="left" style="padding-left:5rem;padding-right:5rem">
This shows the results of cost-sensitive robustness training to protect the odd classes. By incorporating a cost matrix in the loss function for robustness training, we can produce a model where selected transitions are more robust to adversarial transformation.
</center></p>

<p><center>
<a href="/docs/cost-sensitive-poster.pdf"><img src="/images/cost-sensitive-poster-small.png" width="90%" align="center"></a>
</center></p>

<h3 id="paper">Paper</h3>

<p>Xiao Zhang and David Evans. <a href="/docs/cost-sensitive-robustness.pdf"><em>Cost-Sensitive Robustness against Adversarial Examples</em></a>. In <a
href="https://iclr.cc/Conferences/2019"><em>Seventh International Conference on Learning Representations</em></a> (ICLR). New Orleans. May 2019. [<a href="https://arxiv.org/abs/1810.09225">arXiv</a>] [<a
href="https://openreview.net/forum?id=BygANhA9tQ">OpenReview</a>] [<a href="/docs/cost-sensitive-robustness.pdf">PDF</a>]</p>

<h3 id="code">Code</h3>

<p><a href="https://github.com/xiaozhanguva/Cost-Sensitive-Robustness"><em>https://github.com/xiaozhanguva/Cost-Sensitive-Robustness</em></a></p>






</section>




<div class="footer">
<hr class="thin" />


<p></p>
</div>

</div>
</div>





</body>
</html>
Binary file added public/docs/cost-sensitive-poster.pdf
Binary file not shown.
Binary file added public/images/cost-sensitive-cifar.png
Binary file added public/images/cost-sensitive-poster-small.png
Binary file added public/images/protecteven.png
44 changes: 38 additions & 6 deletions public/index.html
@@ -116,6 +116,8 @@







<h1 id="is-robust-machine-learning-possible">Is Robust Machine Learning Possible?</h1>
@@ -146,7 +148,7 @@ <h2 id="projects">Projects</h2>
<section style="display: table;width: 100%">
<header style="display: table-row; padding: 0.5rem">
<div style="display: table-cell; padding: 0.5rem; color:#FFFFFF;background:#663399;text-align: center;width: 49%">
<a href="/gpevasion" class="hlink">Genetic&nbsp;Programming</a>
<a href="/concentration" class="hlink">Measuring Concentration</a>
</div>
<div style="display: table-cell; padding: 0.5rem;color:#000000;background: #FFFFFF;text-align: center; width:2%"></div>
<div style="display: table-cell; padding: 0.5rem;color:#FFFFFF;background: #2c0f52;text-align: center;">
@@ -155,15 +157,45 @@
</header>
<div style="display: table-row;">
<div style="display: table-cell;">
<a href="/gpevasion"><img src="/images/geneticsearch.png" alt="Genetic Search" width="100%" align="center"></a><br>
Evolutionary framework to automatically find variants that preserve malicious behavior but evade a target classifier.
<a href="/concentration"><img src="/images/concentration/alg.png" alt="Empirically Measuring Concentration" width="100%" align="center"></a><br>
Method to empirically
measure concentration of real datasets, finding that it does not
explain the lack of robustness of state-of-the-art models.<br></br>
</div>

<div style="display: table-cell;"></div>
<div style="display: table-cell;text-align:center">

<div style="display: table-cell;text-align:left;">
<a href="/squeezing"><img src="/images/squeezing.png" alt="Feature Squeezing" width="100%" align="center"></a><br>
Reducing the search space for adversaries by coalescing inputs.<br>
<font size="-1" style="color:#666;">(The top row shows L<sub>0</sub> adversarial examples, squeezed by median smoothing.)</font>
Reduce search space for adversaries by coalescing inputs. <font style="color:#666;line-height:0.5;" size="-1">(Top row shows $\ell_0$ adversarial examples, squeezed by median smoothing.)</font>
</div>

</div>


<header style="display: table-row; padding: 0.5rem">
<div style="display: table-cell; padding: 0.5rem;color:#FFFFFF;background: #2c0f52;text-align: center;">
<a href="/costsensitive" class="hlink">Cost-Sensitive Robustness</a>
</div>
<div style="display: table-cell; padding: 0.5rem;color:#000000;background: #FFFFFF;text-align: center; width:2%"></div>
<div style="display: table-cell; padding: 0.5rem; color:#FFFFFF;background:#663399;text-align: center;width: 49%">
<a href="/gpevasion" class="hlink">Genetic&nbsp;Programming</a>
</div>

</header>
<div style="display: table-row;">

<div style="display: table-cell;padding:0">
<center><a href="/costsensitive"><img src="/images/cost-sensitive-cifar.png" alt="Cost-Sensitive Robustness" width="80%" align="center"></a></center>
<!--Focus robust training on transitions given by a cost matrix to make security-critical transitions robust.-->
</div>

<div style="display: table-cell;"></div>
<div style="display: table-cell;">
<a href="/gpevasion"><img src="/images/geneticsearch.png" alt="Genetic Search" width="100%" align="center"></a><br>
Evolutionary framework to automatically find variants that preserve malicious behavior but evade a target classifier.
</div>

</div>
</section>
