
Commit

Rebuilt site
evansuva committed Dec 30, 2018
1 parent 0c0bc49 commit 51f7c01
Showing 5 changed files with 92 additions and 3 deletions.
21 changes: 21 additions & 0 deletions content/#papers.md#
@@ -0,0 +1,21 @@
+++
title = "Papers"
+++

Xiao Zhang and David Evans. [_Cost-Sensitive Robustness against Adversarial Examples_](https://arxiv.org/abs/1810.09225). In <a href="https://iclr.cc/Conferences/2019"><em>Seventh International Conference on Learning Representations</em></a> (ICLR). New Orleans. May 2019. [<a href="https://arxiv.org/abs/1810.09225">arXiv</a>] [<a href="https://openreview.net/forum?id=BygANhA9tQ">OpenReview</a>] [<a href="https://arxiv.org/pdf/1810.09225.pdf">PDF</a>]

Weilin Xu, David Evans, and Yanjun Qi. [_Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks_](/docs/featuresqueezing.pdf). [_2018 Network and Distributed System Security Symposium_](https://www.ndss-symposium.org/ndss2018/). 18-21 February 2018, San Diego, California. Full paper (15 pages): [[PDF](/docs/featuresqueezing.pdf)]

Qixue Xiao, Kang Li, Deyue Zhang, and Weilin Xu. [_Security Risks in Deep Learning Implementations_](https://arxiv.org/abs/1711.11008). <a href="https://www.ieee-security.org/TC/SPW2018/DLS/#"><em>1st Deep Learning and Security Workshop</em></a> (co-located with the 39th <em>IEEE Symposium on Security and Privacy</em>). San Francisco, California. 24 May 2018. [[PDF](https://arxiv.org/pdf/1711.11008.pdf)]

Weilin Xu, David Evans, and Yanjun Qi. [_Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples_](https://arxiv.org/abs/1705.10686). arXiv preprint, 30 May 2017. [[PDF](https://arxiv.org/pdf/1705.10686.pdf), 3 pages]

Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, and Yanjun Qi. [_DeepCloak: Masking Deep Neural Network Models for Robustness against Adversarial Samples_](https://arxiv.org/abs/1702.06763). ICLR Workshops, 24-26 April 2017. [[PDF](https://arxiv.org/pdf/1702.06763.pdf)]

Weilin Xu, Yanjun Qi, and David Evans. [_Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers_](/docs/evademl.pdf). [_Network and Distributed System Security Symposium 2016_](https://www.internetsociety.org/events/ndss-symposium-2016), 21-24 February 2016, San Diego, California. Full paper (15 pages): [[PDF](/docs/evademl.pdf)]

17 changes: 17 additions & 0 deletions content/main.md
@@ -54,6 +54,10 @@ Reducing the search space for adversaries by coalescing inputs.<br>

## Papers

Xiao Zhang and David Evans. [_Cost-Sensitive Robustness against Adversarial Examples_](https://arxiv.org/abs/1810.09225). In <a href="https://iclr.cc/Conferences/2019"><em>Seventh International Conference on Learning Representations</em></a> (ICLR). New Orleans. May 2019. [<a href="https://arxiv.org/abs/1810.09225">arXiv</a>] [<a href="https://openreview.net/forum?id=BygANhA9tQ">OpenReview</a>] [<a href="https://arxiv.org/pdf/1810.09225.pdf">PDF</a>]

Weilin Xu, David Evans, and Yanjun Qi. [_Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks_](/docs/featuresqueezing.pdf). [_2018 Network and Distributed System Security Symposium_](https://www.ndss-symposium.org/ndss2018/). 18-21 February 2018, San Diego, California. Full paper (15 pages): [[PDF](/docs/featuresqueezing.pdf)]

@@ -64,6 +68,19 @@ Classifiers A Case Study on PDF Malware Classifiers_](/docs/evademl.pdf). [_Net

## Talks

<p>
<a href="https://vid.umd.edu/detsmediasite/Play/e8009558850944bfb2cac477f8d741711d?catalog=74740199-303c-49a2-9025-2dee0a195650"><b>Can
Machine Learning Ever Be Trustworthy?</b></a>. University of Maryland, <a href="https://ece.umd.edu/events/distinguished-colloquium-series">Booz
Allen Hamilton Distinguished Colloquium</a>. 7&nbsp;December
2018. [<a href="https://speakerdeck.com/evansuva/can-machine-learning-ever-be-trustworthy">SpeakerDeck</a>]
[<a href="https://vid.umd.edu/detsmediasite/Play/e8009558850944bfb2cac477f8d741711d?catalog=74740199-303c-49a2-9025-2dee0a195650">Video</a>]
</p>
<p>
<a href="https://speakerdeck.com/evansuva/mutually-assured-destruction-and-the-impending-ai-apocalypse"><b>Mutually
Assured Destruction and the Impending AI Apocalypse</b></a>. Opening keynote, <a href="https://www.usenix.org/conference/woot18">12<sup>th</sup> USENIX Workshop on Offensive Technologies</a> 2018. (Co-located with <em>USENIX Security Symposium</em>.) Baltimore, Maryland. 13 August 2018. [<a href="https://speakerdeck.com/evansuva/mutually-assured-destruction-and-the-impending-ai-apocalypse">SpeakerDeck</a>]
</p>
<p>
<center>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/sFhD6ABghf8?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe><br>
</p>
19 changes: 18 additions & 1 deletion public/index.html
@@ -150,6 +150,10 @@ <h2 id="projects">Projects</h2>

<h2 id="papers">Papers</h2>

<p>Xiao Zhang and David Evans. <a href="https://arxiv.org/abs/1810.09225"><em>Cost-Sensitive Robustness against Adversarial Examples</em></a>. In <a href="https://iclr.cc/Conferences/2019"><em>Seventh International Conference on Learning Representations</em></a> (ICLR). New Orleans. May 2019. [<a href="https://arxiv.org/abs/1810.09225">arXiv</a>] [<a href="https://openreview.net/forum?id=BygANhA9tQ">OpenReview</a>] [<a href="https://arxiv.org/pdf/1810.09225.pdf">PDF</a>]</p>

<p>Weilin Xu, David Evans, and Yanjun Qi. <a href="/docs/featuresqueezing.pdf"><em>Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks</em></a>. <a href="https://www.ndss-symposium.org/ndss2018/"><em>2018 Network and Distributed System Security Symposium</em></a>. 18-21 February 2018, San Diego, California. Full paper (15 pages): [<a href="/docs/featuresqueezing.pdf">PDF</a>]</p>

@@ -160,7 +164,20 @@ <h2 id="papers">Papers</h2>

<h2 id="talks">Talks</h2>

<p><center>
<p><p>
<a href="https://vid.umd.edu/detsmediasite/Play/e8009558850944bfb2cac477f8d741711d?catalog=74740199-303c-49a2-9025-2dee0a195650"><b>Can
Machine Learning Ever Be Trustworthy?</b></a>. University of Maryland, <a href="https://ece.umd.edu/events/distinguished-colloquium-series">Booz
Allen Hamilton Distinguished Colloquium</a>. 7&nbsp;December
2018. [<a href="https://speakerdeck.com/evansuva/can-machine-learning-ever-be-trustworthy">SpeakerDeck</a>]
[<a href="https://vid.umd.edu/detsmediasite/Play/e8009558850944bfb2cac477f8d741711d?catalog=74740199-303c-49a2-9025-2dee0a195650">Video</a>]
</p>
<p>
<a href="https://speakerdeck.com/evansuva/mutually-assured-destruction-and-the-impending-ai-apocalypse"><b>Mutually
Assured Destruction and the Impending AI Apocalypse</b></a>. Opening keynote, <a href="https://www.usenix.org/conference/woot18">12<sup>th</sup> USENIX Workshop on Offensive Technologies</a> 2018. (Co-located with <em>USENIX Security Symposium</em>.) Baltimore, Maryland. 13 August 2018. [<a href="https://speakerdeck.com/evansuva/mutually-assured-destruction-and-the-impending-ai-apocalypse">SpeakerDeck</a>]
</p>
<p>
<center>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/sFhD6ABghf8?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe><br>
</p>
</center><br></p>
19 changes: 18 additions & 1 deletion public/index.xml
@@ -161,6 +161,10 @@ Reducing the search space for adversaries by coalescing inputs.&lt;br&gt;

&lt;h2 id=&#34;papers&#34;&gt;Papers&lt;/h2&gt;

&lt;p&gt;Xiao Zhang and David Evans. &lt;a href=&#34;https://arxiv.org/abs/1810.09225&#34;&gt;&lt;em&gt;Cost-Sensitive Robustness against Adversarial Examples&lt;/em&gt;&lt;/a&gt;. In &lt;a href=&#34;https://iclr.cc/Conferences/2019&#34;&gt;&lt;em&gt;Seventh International Conference on Learning Representations&lt;/em&gt;&lt;/a&gt; (ICLR). New Orleans. May 2019. [&lt;a href=&#34;https://arxiv.org/abs/1810.09225&#34;&gt;arXiv&lt;/a&gt;] [&lt;a href=&#34;https://openreview.net/forum?id=BygANhA9tQ&#34;&gt;OpenReview&lt;/a&gt;] [&lt;a href=&#34;https://arxiv.org/pdf/1810.09225.pdf&#34;&gt;PDF&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;Weilin Xu, David Evans, and Yanjun Qi. &lt;a href=&#34;//evademl.org/docs/featuresqueezing.pdf&#34;&gt;&lt;em&gt;Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks&lt;/em&gt;&lt;/a&gt;. &lt;a href=&#34;https://www.ndss-symposium.org/ndss2018/&#34;&gt;&lt;em&gt;2018 Network and Distributed System Security Symposium&lt;/em&gt;&lt;/a&gt;. 18-21 February 2018, San Diego, California. Full paper (15 pages): [&lt;a href=&#34;//evademl.org/docs/featuresqueezing.pdf&#34;&gt;PDF&lt;/a&gt;]&lt;/p&gt;

@@ -171,7 +175,20 @@ Classifiers A Case Study on PDF Malware Classifiers&lt;/em&gt;&lt;/a&gt;. &lt;a

&lt;h2 id=&#34;talks&#34;&gt;Talks&lt;/h2&gt;

&lt;p&gt;&lt;center&gt;
&lt;p&gt;&lt;p&gt;
&lt;a href=&#34;https://vid.umd.edu/detsmediasite/Play/e8009558850944bfb2cac477f8d741711d?catalog=74740199-303c-49a2-9025-2dee0a195650&#34;&gt;&lt;b&gt;Can
Machine Learning Ever Be Trustworthy?&lt;/b&gt;&lt;/a&gt;. University of Maryland, &lt;a href=&#34;https://ece.umd.edu/events/distinguished-colloquium-series&#34;&gt;Booz
Allen Hamilton Distinguished Colloquium&lt;/a&gt;. 7&amp;nbsp;December
2018. [&lt;a href=&#34;https://speakerdeck.com/evansuva/can-machine-learning-ever-be-trustworthy&#34;&gt;SpeakerDeck&lt;/a&gt;]
[&lt;a href=&#34;https://vid.umd.edu/detsmediasite/Play/e8009558850944bfb2cac477f8d741711d?catalog=74740199-303c-49a2-9025-2dee0a195650&#34;&gt;Video&lt;/a&gt;]
&lt;/p&gt;
&lt;p&gt;
&lt;a href=&#34;https://speakerdeck.com/evansuva/mutually-assured-destruction-and-the-impending-ai-apocalypse&#34;&gt;&lt;b&gt;Mutually
Assured Destruction and the Impending AI Apocalypse&lt;/b&gt;&lt;/a&gt;. Opening keynote, &lt;a href=&#34;https://www.usenix.org/conference/woot18&#34;&gt;12&lt;sup&gt;th&lt;/sup&gt; USENIX Workshop on Offensive Technologies&lt;/a&gt; 2018. (Co-located with &lt;em&gt;USENIX Security Symposium&lt;/em&gt;.) Baltimore, Maryland. 13 August 2018. [&lt;a href=&#34;https://speakerdeck.com/evansuva/mutually-assured-destruction-and-the-impending-ai-apocalypse&#34;&gt;SpeakerDeck&lt;/a&gt;]
&lt;/p&gt;
&lt;p&gt;
&lt;center&gt;
&lt;iframe width=&#34;640&#34; height=&#34;360&#34; src=&#34;https://www.youtube-nocookie.com/embed/sFhD6ABghf8?rel=0&#34; frameborder=&#34;0&#34; allow=&#34;autoplay; encrypted-media&#34; allowfullscreen&gt;&lt;/iframe&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/center&gt;&lt;br&gt;&lt;/p&gt;
19 changes: 18 additions & 1 deletion public/main/index.html
@@ -163,6 +163,10 @@ <h2 id="projects">Projects</h2>

<h2 id="papers">Papers</h2>

<p>Xiao Zhang and David Evans. <a href="https://arxiv.org/abs/1810.09225"><em>Cost-Sensitive Robustness against Adversarial Examples</em></a>. In <a href="https://iclr.cc/Conferences/2019"><em>Seventh International Conference on Learning Representations</em></a> (ICLR). New Orleans. May 2019. [<a href="https://arxiv.org/abs/1810.09225">arXiv</a>] [<a href="https://openreview.net/forum?id=BygANhA9tQ">OpenReview</a>] [<a href="https://arxiv.org/pdf/1810.09225.pdf">PDF</a>]</p>

<p>Weilin Xu, David Evans, and Yanjun Qi. <a href="/docs/featuresqueezing.pdf"><em>Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks</em></a>. <a href="https://www.ndss-symposium.org/ndss2018/"><em>2018 Network and Distributed System Security Symposium</em></a>. 18-21 February 2018, San Diego, California. Full paper (15 pages): [<a href="/docs/featuresqueezing.pdf">PDF</a>]</p>

@@ -173,7 +177,20 @@ <h2 id="papers">Papers</h2>

<h2 id="talks">Talks</h2>

<p><center>
<p><p>
<a href="https://vid.umd.edu/detsmediasite/Play/e8009558850944bfb2cac477f8d741711d?catalog=74740199-303c-49a2-9025-2dee0a195650"><b>Can
Machine Learning Ever Be Trustworthy?</b></a>. University of Maryland, <a href="https://ece.umd.edu/events/distinguished-colloquium-series">Booz
Allen Hamilton Distinguished Colloquium</a>. 7&nbsp;December
2018. [<a href="https://speakerdeck.com/evansuva/can-machine-learning-ever-be-trustworthy">SpeakerDeck</a>]
[<a href="https://vid.umd.edu/detsmediasite/Play/e8009558850944bfb2cac477f8d741711d?catalog=74740199-303c-49a2-9025-2dee0a195650">Video</a>]
</p>
<p>
<a href="https://speakerdeck.com/evansuva/mutually-assured-destruction-and-the-impending-ai-apocalypse"><b>Mutually
Assured Destruction and the Impending AI Apocalypse</b></a>. Opening keynote, <a href="https://www.usenix.org/conference/woot18">12<sup>th</sup> USENIX Workshop on Offensive Technologies</a> 2018. (Co-located with <em>USENIX Security Symposium</em>.) Baltimore, Maryland. 13 August 2018. [<a href="https://speakerdeck.com/evansuva/mutually-assured-destruction-and-the-impending-ai-apocalypse">SpeakerDeck</a>]
</p>
<p>
<center>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/sFhD6ABghf8?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe><br>
</p>
</center><br></p>
