[NetBehaviour] Fwd: CRYPTO-GRAM, June 15, 2019
Alan Sondheim
sondheim at gmail.com
Sat Jun 15 20:05:03 CEST 2019
I don't know if you read this or have publicized it before, but in terms of
lists, I've also found it invaluable!
---------- Forwarded message ---------
From: Bruce Schneier <schneier at schneier.com>
Date: Sat, Jun 15, 2019 at 1:45 PM
Subject: CRYPTO-GRAM, June 15, 2019
To: <sondheim at panix.com>
Crypto-Gram
June 15, 2019
by Bruce Schneier
CTO, IBM Resilient
schneier at schneier.com
https://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram's web page
<https://www.schneier.com/crypto-gram.html>.
Read this issue on the web
<https://www.schneier.com/crypto-gram/archives/2019/0615.html>
These same essays and news items appear in the Schneier on Security
<https://www.schneier.com/> blog, along with a lively and intelligent
comment section. An RSS feed is available.
** *** ***** ******* *********** *************
In this issue:
1. International Spy Museum Reopens
2. WhatsApp Vulnerability Fixed
3. Another Intel Chip Flaw
4. More Attacks against Computer Automatic Update Systems
5. Why Are Cryptographers Being Denied Entry into the US?
6. The Concept of "Return on Data"
7. How Technology and Politics Are Changing Spycraft
8. Fingerprinting iPhones
9. Visiting the NSA
10. Thangrycat: A Serious Cisco Vulnerability
11. German SG-41 Encryption Machine Up for Auction
12. Germany Talking about Banning End-to-End Encryption
13. NSA Hawaii
14. First American Financial Corp. Data Records Leak
15. Alex Stamos on Content Moderation and Security
16. Fraudulent Academic Papers
17. The Human Cost of Cyberattacks
18. The Importance of Protecting Cybersecurity Whistleblowers
19. The Cost of Cybercrime
20. Lessons Learned Trying to Secure Congressional Campaigns
21. Chinese Military Wants to Develop Custom OS
22. Security and Human Behavior (SHB) 2019
23. iOS Shortcut for Recording the Police
24. Employment Scam
25. Workshop on the Economics of Information Security
26. Rock-Paper-Scissors Robot
27. Report on the Stalkerware Industry
28. Video Surveillance by Computer
29. Computers and Video Surveillance
30. Upcoming Speaking Engagements
** *** ***** ******* *********** *************
International Spy Museum Reopens
*[2019.05.15]*
<https://www.schneier.com/blog/archives/2019/05/international_s.html>
The International
Spy Museum <https://www.spymuseum.org> has reopened
<https://www.nytimes.com/2019/05/06/arts/design/spy-museum-washington-review.html>
in Washington, DC.
** *** ***** ******* *********** *************
WhatsApp Vulnerability Fixed
*[2019.05.15]*
<https://www.schneier.com/blog/archives/2019/05/whatsapp_vulner_1.html>
WhatsApp fixed a devastating
<https://www.nytimes.com/2019/05/13/technology/nso-group-whatsapp-spying.html>
vulnerability
<https://www.wired.com/story/whatsapp-hack-phone-call-voip-buffer-overflow/>
that allowed someone to remotely hack a phone by initiating a WhatsApp
voice call. The recipient didn't even have to answer the call.
The Israeli cyber-arms manufacturer NSO Group
<https://www.cnn.com/2019/05/14/tech/nso-whatsapp-security-breach-intl/index.html>
is believed to be behind the exploit, but of course there is no definitive
proof.
If you use WhatsApp, update your app immediately.
** *** ***** ******* *********** *************
Another Intel Chip Flaw
*[2019.05.16]*
<https://www.schneier.com/blog/archives/2019/05/another_intel_c.html>
Remember the Spectre and Meltdown attacks
<https://www.schneier.com/blog/archives/2018/01/spectre_and_mel_1.html>
from last year? They were a new class of attacks against complex CPUs,
finding subliminal channels in optimization techniques that allow hackers
to steal information. Since their discovery, researchers have found additional
similar vulnerabilities
<https://arstechnica.com/gadgets/2018/11/spectre-meltdown-researchers-unveil-7-more-speculative-execution-attacks/>
.
A whole bunch more have just been
<https://arstechnica.com/gadgets/2019/05/new-speculative-execution-bug-leaks-data-from-intel-chips-internal-buffers/>
discovered
<https://www.wired.com/story/intel-mds-attack-speculative-execution-buffer/>
.
I don't think we're finished yet. A year and a half ago I wrote: "But more
are coming, and they'll be worse. 2018 will be the year of microprocessor
vulnerabilities, and it's going to be a wild ride." I think more are still
coming.
EDITED TO ADD (6/13): A mathematical analysis
<https://arxiv.org/pdf/1902.05178.pdf> of the problem that claims we'll never
completely fix
<https://arstechnica.com/gadgets/2019/02/google-software-is-never-going-to-be-able-to-fix-spectre-type-bugs/>
this class of problems.
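For intuition about how this class of attacks leaks data, here is a toy, purely conceptual simulation of a Flush+Reload-style cache side channel in Python. The "cache" is just a set and the secret value is made up; real attacks measure memory-access latency on actual hardware, which no Python sketch can reproduce.

    # Toy model of a cache side channel: the victim's memory access pattern
    # depends on a secret, and the attacker recovers the secret by observing
    # which "cache line" became fast (here: which entry landed in a set).
    SECRET = 42                      # hypothetical secret the victim uses

    cache = set()                    # stands in for the CPU cache

    def victim(probe_lines):
        # The victim touches exactly one probe line, chosen by the secret.
        cache.add(probe_lines[SECRET])

    def attacker(probe_lines):
        cache.clear()                # "flush" every probe line
        victim(probe_lines)          # let the victim (or speculation) run
        for i, line in enumerate(probe_lines):
            if line in cache:        # a real attack times the access instead
                return i             # the "fast" index reveals the secret
        return None

    probe = [f"line-{i}" for i in range(256)]
    print(attacker(probe))           # prints 42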
** *** ***** ******* *********** *************
More Attacks against Computer Automatic Update Systems
*[2019.05.16]*
<https://www.schneier.com/blog/archives/2019/05/more_attacks_ag.html> Last
month, Kaspersky discovered that Asus's live update system was infected
<https://www.vice.com/en_us/article/pan9wn/hackers-hijacked-asus-software-updates-to-install-backdoors-on-thousands-of-computers>
with
<https://www.zdnet.com/article/supply-chain-attack-installs-backdoors-through-hijacked-asus-live-update-software/>
malware
<https://www.schneier.com/blog/archives/2019/03/malware_install.html>, an
operation it called Operation Shadowhammer. Now we learn that six other
companies were
<https://www.tomshardware.com/news/operation-shadowhammer-kaspersky-asus-victims-securelist,39156.html>
targeted <https://www.kaspersky.com/blog/details-shadow-hammer/26597/> in
the same operation.
As we mentioned before, ASUS was not the only company used by the
attackers. Studying this case, our experts found other samples that used
similar algorithms. As in the ASUS case, the samples were using digitally
signed binaries from three other Asian vendors:
- Electronics Extreme, authors of the zombie survival game called
*Infestation:
Survivor Stories*,
- Innovative Extremist, a company that provides Web and IT
infrastructure services but also used to work in game development,
- Zepetto, the South Korean company that developed the video game *Point
Blank*.
According to our researchers, the attackers either had access to the source
code of the victims' projects or they injected malware at the time of
project compilation, meaning they were in the networks of those companies.
And this reminds us of an attack that we reported on a year ago: the CCleaner
incident <https://www.kaspersky.com/blog/ccleaner-supply-chain/21785/>.
Also, our experts identified three additional victims: another video gaming
company, a conglomerate holding company and a pharmaceutical company, all
in South Korea. For now we cannot share additional details about those
victims, because we are in the process of notifying them about the attack.
Me on supply chain security
<https://www.schneier.com/blog/archives/2018/05/supply-chain_se.html>.
EDITED TO ADD (6/12): Kaspersky's expanded report
<https://securelist.com/operation-shadowhammer-a-high-profile-supply-chain-attack/90380/>
.
** *** ***** ******* *********** *************
Why Are Cryptographers Being Denied Entry into the US?
*[2019.05.17]*
<https://www.schneier.com/blog/archives/2019/05/why_are_cryptog.html> In
March, Adi Shamir -- that's the "S" in RSA -- was
<https://threatpost.com/rsa-conference-2019-cryptographers-panel-decries-adi-shamirs-visa-issues/142533/>
denied
<https://www.cnet.com/news/adi-shamir-couldnt-get-us-visa-to-attend-rsa-conference-named-for-him/>
a <https://www.theregister.co.uk/2019/03/05/rsa_cofounder_us_visa_row/> US
visa to attend the RSA Conference. He's Israeli.
This month, British citizen Ross Anderson couldn't attend an awards
ceremony in DC because of visa issues. (You can listen to his recorded
acceptance speech <https://www.youtube.com/watch?v=Q--8SidkBII>.) I've
heard of two other prominent cryptographers who are in the same boat. Is
there some cryptographer blacklist? Is something else going on? A lot of us
would like to know.
** *** ***** ******* *********** *************
The Concept of "Return on Data"
*[2019.05.20]*
<https://www.schneier.com/blog/archives/2019/05/the_concept_of_.html> This
law review article by Noam Kolt, titled "Return on Data
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3362880>," proposes an
interesting new way of thinking of privacy law.
*Abstract:* Consumers routinely supply personal data to technology
companies in exchange for services. Yet, the relationship between the
utility (U) consumers gain and the data (D) they supply -- "return on data"
(ROD) -- remains largely unexplored. Expressed as a ratio, ROD = U / D.
While lawmakers strongly advocate protecting consumer privacy, they tend to
overlook ROD. Are the benefits of the services enjoyed by consumers, such
as social networking and predictive search, commensurate with the value of
the data extracted from them? How can consumers compare competing
data-for-services deals? Currently, the legal frameworks regulating these
transactions, including privacy law, aim primarily to protect personal
data. They treat data protection as a standalone issue, distinct from the
benefits which consumers receive. This article suggests that privacy
concerns should not be viewed in isolation, but as part of ROD. Just as
companies can quantify return on investment (ROI) to optimize investment
decisions, consumers should be able to assess ROD in order to better spend
and invest personal data. Making data-for-services transactions more
transparent will enable consumers to evaluate the merits of these deals,
negotiate their terms and make more informed decisions. Pivoting from the
privacy paradigm to ROD will both incentivize data-driven service providers
to offer consumers higher ROD, as well as create opportunities for new
market entrants.
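As a worked example of the paper's ratio, here is a trivial sketch comparing two invented data-for-services deals; the utility and data figures are placeholders, since the article leaves open how U and D would actually be measured.

    # ROD = U / D, per the abstract; all numbers below are made up.
    def return_on_data(utility: float, data_supplied: float) -> float:
        return utility / data_supplied

    deal_a = return_on_data(utility=80, data_supplied=40)  # hypothetical social network
    deal_b = return_on_data(utility=60, data_supplied=15)  # hypothetical search service
    print(deal_a, deal_b)  # 2.0 vs. 4.0: deal B returns more utility per unit of data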
** *** ***** ******* *********** *************
How Technology and Politics Are Changing Spycraft
*[2019.05.21]*
<https://www.schneier.com/blog/archives/2019/05/how_technology_.html>
Interesting article
<https://foreignpolicy.com/2019/04/27/the-spycraft-revolution-espionage-technology/>
about how traditional nation-based spycraft is changing. Basically, the
Internet makes it increasingly difficult to generate a good cover story;
cell phone and other electronic surveillance techniques make tracking
people easier; and machine learning will make all of this automatic.
Meanwhile, Western countries have new laws and norms that put them at a
disadvantage relative to other countries. And finally, much of this has gone

corporate.
** *** ***** ******* *********** *************
Fingerprinting iPhones
*[2019.05.22]*
<https://www.schneier.com/blog/archives/2019/05/fingerprinting_7.html>
This clever
attack <https://sensorid.cl.cam.ac.uk/> allows someone to uniquely identify
a phone when you visit a website, based on data from the accelerometer,
gyroscope, and magnetometer sensors.
We have developed a new type of fingerprinting attack, the calibration
fingerprinting attack. Our attack uses data gathered from the
accelerometer, gyroscope and magnetometer sensors found in smartphones to
construct a globally unique fingerprint. Overall, our attack has the
following advantages:
- The attack can be launched by any website you visit or any app you use
on a vulnerable device without requiring any explicit confirmation or
consent from you.
- The attack takes less than one second to generate a fingerprint.
- The attack can generate a globally unique fingerprint for iOS devices.
- The calibration fingerprint never changes, even after a factory reset.
- The attack provides an effective means to track you as you browse
across the web and move between apps on your phone.
* Following our disclosure, Apple has patched this vulnerability in iOS
12.2.
Research paper <https://www.ieee-security.org/TC/SP2019/papers/405.pdf>.
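The following is a minimal sketch of the general idea, not the paper's actual algorithm: per-device calibration constants leak through the sensor readings that any page or app can collect, and quantizing an estimate of those constants yields a value that is stable across sessions. The sample data and quantization step are invented.

    import numpy as np

    def sensor_fingerprint(samples: np.ndarray, quantum: float = 0.001):
        # Estimate a per-axis bias from readings taken while the device is
        # roughly still, then quantize so the result repeats across visits.
        # The real attack recovers the factory calibration matrix, which is
        # far more precise; this only illustrates the principle.
        bias = samples.mean(axis=0)                  # shape (3,): x, y, z
        return tuple(np.round(bias / quantum).astype(int))

    # Hypothetical accelerometer readings from two visits to the same device.
    rng = np.random.default_rng(0)
    visit1 = rng.normal([0.0123, -0.0045, 9.81], 0.002, size=(200, 3))
    visit2 = rng.normal([0.0123, -0.0045, 9.81], 0.002, size=(200, 3))
    print(sensor_fingerprint(visit1) == sensor_fingerprint(visit2))  # usually True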
** *** ***** ******* *********** *************
Visiting the NSA
*[2019.05.22]*
<https://www.schneier.com/blog/archives/2019/05/visiting_the_ns.html>
Yesterday, I visited the NSA. It was Cyber Command's birthday, but that's
not why I was there. I visited as part of the Berklett Cybersecurity
Project, run out of the Berkman Klein Center and funded by the Hewlett
Foundation. (BERKman hewLETT -- get it? We have a web page
<https://cyber.harvard.edu/research/cybersecurity>, but it's badly out of
date.)
It was a full day of meetings, all unclassified but under the Chatham House
Rule. Gen. Nakasone welcomed us and took questions at the start. Various
senior officials spoke with us on a variety of topics, but mostly focused
on three areas:
- Russian influence operations, both what the NSA and US Cyber Command
did during the 2018 election and what they can do in the future;
- China and the threats to critical infrastructure from untrusted
computer hardware, both the 5G network and more broadly;
- Machine learning, both how to ensure a ML system is compliant with all
laws, and how ML can help with other compliance tasks.
It was all interesting. Those first two topics are ones that I am thinking
and writing about, and it was good to hear their perspective. I find that I
am much more closely aligned with the NSA about cybersecurity than I am
about privacy, which made the meeting much less fraught than it would have
been if we were discussing Section 702 of the FISA Amendments Act, Section
215 of the USA Freedom Act (up for renewal
<https://www.washingtonexaminer.com/policy/patriot-act-renewal-gives-privacy-advocates-an-opening>
next year), or any 4th Amendment violations. I don't think we're past those
issues by any means, but they make up less of what I am working on.
** *** ***** ******* *********** *************
Thangrycat: A Serious Cisco Vulnerability
*[2019.05.23]*
<https://www.schneier.com/blog/archives/2019/05/thangrycat_a_se.html>
Summary <https://thrangrycat.com/>:
Thangrycat is caused by a series of hardware design flaws within Cisco's
Trust Anchor module. First commercially introduced in 2013, Cisco Trust
Anchor module (TAm) is a proprietary hardware security module used in a
wide range of Cisco products, including enterprise routers, switches and
firewalls. TAm is the root of trust that underpins all other Cisco security
and trustworthy computing mechanisms in these devices. Thangrycat allows an
attacker to make persistent modification to the Trust Anchor module via
FPGA bitstream modification, thereby defeating the secure boot process and
invalidating Cisco's chain of trust at its root. While the flaws are based
in hardware, Thangrycat can be exploited remotely without any need for
physical access. Since the flaws reside within the hardware design, it is
unlikely that any software security patch will fully resolve the
fundamental security vulnerability.
From a news article
<https://www.nytimes.com/2019/05/21/opinion/internet-security.html>:
Thrangrycat is awful for two reasons. First, if a hacker exploits this
weakness, they can do whatever they want to your routers. Second, the
attack can happen remotely; it's a software vulnerability. But the fix can
only be applied at the hardware level. Like, physical router by physical
router. In person. Yeesh.
That said, Thrangrycat only works once you have administrative access to
the device. You need a two-step attack in order to get Thrangrycat working.
Attack #1 gets you remote administrative access, Attack #2 is Thrangrycat.
Attack #2 can't happen without Attack #1. Cisco can protect you from Attack
#1 by sending out a software update. If your I.T. people have your systems
well secured and are applying updates and patches consistently and you're
not a regular target of nation-state actors, you're relatively safe from
Attack #1, and therefore, pretty safe from Thrangrycat.
Unfortunately, Attack #1 is a garden variety vulnerability. Many systems
don't even have administrative access configured correctly. There's
opportunity for Thrangrycat to be exploited.
And from Boing Boing
<https://boingboing.net/2019/05/22/introspection-engines.html>:
Thangrycat relies on attackers being able to run processes as the system's
administrator, and Red Balloon, the security firm that disclosed the
vulnerability, also revealed a defect that allows attackers to run code as
admin.
It's tempting to dismiss the attack on the trusted computing module as a
ho-hum flourish: after all, once an attacker has root on your system, all
bets are off. But the promise of trusted computing is that computers will
be able to detect and undo this kind of compromise, by using a separate,
isolated computer to investigate and report on the state of the main system
(Huang and Snowden call this an introspection engine
<https://boingboing.net/2017/09/08/impaired-judgment-phones.html>). Once
this system is compromised, it can be forced to give false reports on the
state of the system: for example, it might report that its OS has been
successfully updated to patch a vulnerability when really the update has
just been thrown away.
As Charlie Warzel and Sarah Jeong discuss in the New York Times
<https://www.nytimes.com/2019/05/21/opinion/internet-security.html>, this
is an attack that can be executed remotely, but can only be detected by
someone physically in the presence of the affected system (and only then
after a very careful inspection, and there may still be no way to do
anything about it apart from replacing the system or at least the
compromised component).
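To make the chain-of-trust point concrete, here is a minimal sketch of secure boot with a keyed check standing in for real signature verification (actual systems use asymmetric signatures anchored in ROM or, in Cisco's case, the TAm). Nothing here reflects Cisco's implementation; it only shows why a writable trust anchor undoes every later check.

    import hmac, hashlib

    ANCHOR_KEY = b"burned-into-hardware"   # hypothetical root-of-trust secret

    def sign(image: bytes) -> bytes:
        return hmac.new(ANCHOR_KEY, image, hashlib.sha256).digest()

    def secure_boot(stages):
        """stages: list of (name, image, signature), verified in order."""
        for name, image, signature in stages:
            if not hmac.compare_digest(sign(image), signature):
                raise RuntimeError(f"refusing to boot tampered stage: {name}")
            print(f"verified and booted: {name}")

    bootloader, os_image = b"bootloader v1", b"router os v2"
    secure_boot([("bootloader", bootloader, sign(bootloader)),
                 ("os", os_image, sign(os_image))])
    # If an attacker can rewrite the anchor itself (as the FPGA bitstream
    # modification allows), they can re-sign arbitrary images and every
    # check above still passes.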
** *** ***** ******* *********** *************
German SG-41 Encryption Machine Up for Auction
*[2019.05.23]*
<https://www.schneier.com/blog/archives/2019/05/german_sg-41_en.html> A
German auction house is selling
<https://www.hermann-historica.de/de/auctions/lot/id/13004> an SG-41. It
looks beautiful
<http://scienceblogs.de/klausis-krypto-kolumne/2019/05/21/rare-ww2-encryption-machine-up-for-auction/>.
Starting price is 75,000 euros. My guess is that it will sell for around
100K euros.
EDITED TO ADD (6/13): It sold for 98K euros.
** *** ***** ******* *********** *************
Germany Talking about Banning End-to-End Encryption
*[2019.05.24]*
<https://www.schneier.com/blog/archives/2019/05/germany_talking.html> *Der
Spiegel* is reporting
<https://www.spiegel.de/netzwelt/netzpolitik/horst-seehofer-will-messengerdienste-zum-entschluesseln-zwingen-a-1269121.html>
that the German Ministry for Internal Affairs is planning to require all
Internet message services to provide plaintext messages on demand,
basically outlawing strong end-to-end encryption. Anyone not complying will
be blocked, although the article doesn't say how. (Cory Doctorow has previously
explained <https://boingboing.net/2017/06/04/theresa-may-king-canute.html>
why this would be impossible.)
The article is in German, and I would appreciate additional information
from those who can speak the language.
EDITED TO ADD (6/2): Slashdot thread
<https://it.slashdot.org/story/19/06/01/0035255/a-german-minister-wants-to-ban-end-to-end-chat-encryption>.
This seems to be nothing more than political grandstanding: see this post
<https://carnegieendowment.org/2019/05/30/encryption-debate-in-germany-pub-79215>
from the Carnegie Endowment for International Peace.
** *** ***** ******* *********** *************
NSA Hawaii
*[2019.05.24]*
<https://www.schneier.com/blog/archives/2019/05/nsa_hawaii.html> Recently
I've heard Edward Snowden talk about his working at the NSA in Hawaii as
being "under a pineapple field." CBS News recently ran a segment
<https://www.cbsnews.com/news/nsa-hawaii-exclusive-inside-look-front-lines-intelligence-gathering/>
on that NSA listening post on Oahu.
Not a whole lot of actual information. "We're in an office building, in a
pineapple field, on Oahu...." And part of it is underground -- we see a
tunnel. We didn't get to see any pineapples, though.
** *** ***** ******* *********** *************
First American Financial Corp. Data Records Leak
*[2019.05.28]*
<https://www.schneier.com/blog/archives/2019/05/first_american_.html> Krebs
on Security is reporting
<https://krebsonsecurity.com/2019/05/first-american-financial-corp-leaked-hundreds-of-millions-of-title-insurance-records/>
a massive data leak by the real estate title insurance company First
American Financial Corp.
"The title insurance agency collects all kinds of documents from both the
buyer and seller, including Social Security numbers, drivers licenses,
account statements, and even internal corporate documents if you're a small
business. You give them all kinds of private information and you expect
that to stay private."
Shoval shared a document link he'd been given by First American from a
recent transaction, which referenced a record number that was nine digits
long and dated April 2019. Modifying the document number in his link by
numbers in either direction yielded other peoples' records before or after
the same date and time, indicating the document numbers may have been
issued sequentially.
The earliest document number available on the site -- 000000075 --
referenced a real estate transaction from 2003. From there, the dates on
the documents get closer to real time with each forward increment in the
record number.
This is not an uncommon vulnerability: documents without security, just
"protected" by a unique serial number that ends up being easily guessable.
Krebs has no evidence that anyone harvested all this data, but that's not
the point. The company said this in a statement: "At First American,
security, privacy and confidentiality are of the highest priority and we
are committed to protecting our customers' information." That's obviously
not true; security and privacy are probably pretty low priorities for the
company. This is basic stuff, and companies like First American Financial Corp. should
be held liable for their poor security practices.
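For anyone building this kind of document service, here is a minimal sketch of the standard fix: stop exposing sequential record numbers, and hand out either random opaque tokens or expiring signed links plus a server-side authorization check. The function and key names are illustrative, not anything First American uses.

    import hmac, hashlib, secrets, time

    SECRET_KEY = b"server-side-secret"     # hypothetical signing key

    def opaque_token() -> str:
        # 128 bits of randomness: infeasible to enumerate,
        # unlike 000000075, 000000076, ...
        return secrets.token_urlsafe(16)

    def signed_link(record_id: int, ttl_seconds: int = 3600) -> str:
        expires = int(time.time()) + ttl_seconds
        msg = f"{record_id}:{expires}".encode()
        sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
        return f"/documents/{record_id}?expires={expires}&sig={sig}"

    def link_is_valid(record_id: int, expires: int, sig: str) -> bool:
        msg = f"{record_id}:{expires}".encode()
        expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sig) and time.time() < expires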
** *** ***** ******* *********** *************
Alex Stamos on Content Moderation and Security
*[2019.05.29]*
<https://www.schneier.com/blog/archives/2019/05/alex_stamos_on.html> Really
interesting talk <https://www.youtube.com/watch?v=ATmQj787Jcc> by former
Facebook CISO Alex Stamos about the problems inherent in content moderation
by social media platforms. Well worth watching.
** *** ***** ******* *********** *************
Fraudulent Academic Papers
*[2019.05.30]*
<https://www.schneier.com/blog/archives/2019/05/fraudulent_acad.html> The
term "fake news" has lost much of its meaning, but it describes a real and
dangerous Internet trend. Because it's hard for many people to
differentiate a real news site from a fraudulent one, they can be
hoodwinked by fictitious news stories pretending to be real. The result is
that otherwise reasonable people believe lies.
The trends fostering fake news are more general, though, and we need to
start thinking about how it could affect different areas of our lives. In
particular, I worry about how it will affect academia. In addition to fake
news, I worry about fake research.
An example of this seems to have happened recently in the cryptography
field. SIMON <https://en.wikipedia.org/wiki/Simon_(cipher)> is a block
cipher designed by the National Security Agency (NSA) and made public in
2013. It's a general design optimized for hardware implementation, with a
variety of block sizes and key lengths. Academic cryptanalysts have been
trying to break the cipher since then, with some pretty
<https://eprint.iacr.org/2015/666.pdf> good
<https://www.hindawi.com/journals/scn/2018/5160237/> results
<https://eprint.iacr.org/2018/152.pdf>, although the NSA's specified
parameters are still immune to attack. Last week, a paper
<https://www.schneier.com/blog/archives/2019/05/cryptanalysis_o_4.html>
appeared on the International Association for Cryptologic Research (IACR)
ePrint archive purporting to demonstrate a much more effective break of
SIMON, one that would affect actual implementations. The paper was
sufficiently weird, the authors sufficiently unknown and the details of the
attack sufficiently absent, that the editors took it down a few days later.
No harm done in the end.
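For readers unfamiliar with SIMON, its round function is simple enough to show in a few lines. Here is a sketch of the Feistel-like round for a 32-bit-word variant, following the public specification; the key schedule and parameter sets are omitted, and this is an illustration rather than a vetted implementation.

    MASK = 0xFFFFFFFF                     # 32-bit words, e.g. Simon64/128

    def rotl(x: int, r: int, width: int = 32) -> int:
        return ((x << r) | (x >> (width - r))) & MASK

    def simon_round(x: int, y: int, k: int):
        # f(x) = (x <<< 1 & x <<< 8) ^ (x <<< 2); the new left word mixes in
        # y and the round key, and the old left word becomes the new right.
        f = (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)
        return (y ^ f ^ k) & MASK, x

    def encrypt_block(x: int, y: int, round_keys):
        for k in round_keys:
            x, y = simon_round(x, y, k)
        return x, y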
In recent years, there has been a push to speed up the process of
disseminating research results. Instead of the laborious process of
academic publication, researchers have turned to faster online publishing
processes, preprint servers, and simply posting research results. The IACR
ePrint archive is one of those alternatives. This has all sorts of
benefits, but one of the casualties is the process of peer review. As
flawed as that process is, it does help ensure the accuracy of results. (Of
course, bad papers can still make it through the process. We're still
dealing with the aftermath of a flawed, and now retracted, Lancet paper
<https://www.autism-watch.org/news/lancet.shtml> linking vaccines with
autism.)
Like the news business, academic publishing is subject to abuse. We can
only speculate about the motivations of the three people who are listed as
authors on the SIMON paper, but you can easily imagine better-executed and
more nefarious scenarios. In a world of competitive research, one group
might publish a fake result to throw other researchers off the trail. It
might be a company trying to gain an advantage over a potential competitor,
or even a country trying to gain an advantage over another country.
Reverting to a slower and more accurate system isn't the answer; the world
is just moving too fast for that. We need to recognize that fictitious
research results can now easily be injected into our academic publication
system, and tune our skepticism meters accordingly.
This essay previously appeared
<https://www.lawfareblog.com/when-fake-news-comes-academia> on Lawfare.com.
** *** ***** ******* *********** *************
The Human Cost of Cyberattacks
*[2019.05.31]*
<https://www.schneier.com/blog/archives/2019/05/the_human_cost_.html> The
International Committee of the Red Cross has just published a report: "The
Potential Human Cost of Cyber-Operations
<https://www.icrc.org/en/download/file/96008/the-potential-human-cost-of-cyber-operations.pdf>."
It's the result of an "ICRC Expert Meeting" from last year, but was
published this week.
Here's a shorter blog post
<https://blogs.icrc.org/law-and-policy/2019/05/29/potential-human-costs-cyber-operations-key-icrc-takeaways-discussion-tech-experts/>
if you don't want to read the whole thing. And commentary
<https://blog.lukaszolejnik.com/icrc-report-on-cyberoperations/> by one of
the authors.
** *** ***** ******* *********** *************
The Importance of Protecting Cybersecurity Whistleblowers
*[2019.06.03]*
<https://www.schneier.com/blog/archives/2019/06/the_importance_3.html>
Interesting essay
<https://wp.nyu.edu/compliance_enforcement/2019/05/30/effective-cybersecurity-and-data-protection-legislation-should-protect-whistleblowers/>
arguing that we need better legislation to protect cybersecurity
whistleblowers.
Congress should act to protect cybersecurity whistleblowers because
information security has never been so important, or so challenging. In the
wake of a barrage of shocking revelations about data breaches and companies'
mishandling of customer data, a bipartisan consensus has emerged in support
of legislation to give consumers more control over their personal
information, require companies to disclose how they collect and use
consumer data, and impose penalties for data breaches and misuse of
consumer data. The Federal Trade Commission ("FTC") has been held out as
the best agency to implement this new regulation. But for any such
legislation to be effective, it must protect the courageous whistleblowers
who risk their careers to expose data breaches and unauthorized use of
consumers' private data.
Whistleblowers strengthen regulatory regimes, and cybersecurity regulation
would be no exception. Republican and Democratic leaders from the executive
and legislative branches have extolled the virtues of whistleblowers.
High-profile cases abound. Recently, Christopher Wylie exposed Cambridge
Analytica's misuse of Facebook user data to manipulate voters, including
its apparent theft of data from 50 million Facebook users as part of a
psychological profiling campaign. Though additional research is needed, the
existing empirical data reinforces the consensus that whistleblowers help
prevent, detect, and remedy misconduct. Therefore it is reasonable to
conclude that protecting and incentivizing whistleblowers could help the
government address the many complex challenges facing our nation's
information systems.
** *** ***** ******* *********** *************
The Cost of Cybercrime
*[2019.06.04]*
<https://www.schneier.com/blog/archives/2019/06/the_cost_of_cyb_1.html>
Really interesting paper
<https://weis2019.econinfosec.org/wp-content/uploads/sites/6/2019/05/WEIS_2019_paper_25.pdf>
calculating the worldwide cost of cybercrime:
*Abstract:* In 2012 we presented the first systematic study of the costs of
cybercrime. In this paper, we report what has changed in the seven years
since. The period has seen major platform evolution, with the mobile phone
replacing the PC and laptop as the consumer terminal of choice, with
Android replacing Windows, and with many services moving to the cloud. The
use of social networks has become extremely widespread. The executive
summary is that about half of all property crime, by volume and by value,
is now online. We hypothesised in 2012 that this might be so; it is now
established by multiple victimisation studies. Many cybercrime patterns
appear to be fairly stable, but there are some interesting changes. Payment
fraud, for example, has more than doubled in value but has fallen slightly
as a proportion of payment value; the payment system has simply become
bigger, and slightly more efficient. Several new cybercrimes are
significant enough to mention, including business email compromise and
crimes involving cryptocurrencies. The move to the cloud means that system
misconfiguration may now be responsible for as many breaches as phishing.
Some companies have suffered large losses as a side-effect of
denial-of-service worms released by state actors, such as NotPetya; we have
to take a view on whether they count as cybercrime. The infrastructure
supporting cybercrime, such as botnets, continues to evolve, and specific
crimes such as premium-rate phone scams have evolved some interesting
variants. The over-all picture is the same as in 2012: traditional offences
that are now technically 'computer crimes' such as tax and welfare fraud
cost the typical citizen in the low hundreds of Euros/dollars a year;
payment frauds and similar offences, where the modus operandi has been
completely changed by computers, cost in the tens; while the new computer
crimes cost in the tens of cents. Defending against the platforms used to
support the latter two types of crime costs citizens in the tens of dollars.
Our conclusions remain broadly the same as in 2012: it would be
economically rational to spend less in anticipation of cybercrime (on
antivirus, firewalls, etc.) and more on response. We are particularly bad
at prosecuting criminals who operate infrastructure that other wrongdoers
exploit. Given the growing realisation among policymakers that crime hasn't
been falling over the past decade, merely moving online, we might
reasonably hope for better funded and coordinated law-enforcement action.
Richard Clayton gave a presentation on this yesterday at WEIS. His final
slide contained a summary.
- Payment fraud is up, but credit card sales are up even more -- so
we're winning.
- Cryptocurrencies are enabling new scams, but the big money is still
being lost in more traditional investment fraud.
- Telecom fraud is down, basically because Skype is free.
- Anti-virus fraud has almost disappeared, but tech support scams are
growing very rapidly.
- The big money is still in tax fraud, welfare fraud, VAT fraud, and so
on.
- We spend more money on cyber defense than we do on the actual losses.
- Criminals largely act with impunity. They don't believe they will get
caught, and mostly that's correct.
Bottom line: the technology has changed a lot since 2012, but the economic
considerations remain unchanged.
** *** ***** ******* *********** *************
Lessons Learned Trying to Secure Congressional Campaigns
*[2019.06.05]*
<https://www.schneier.com/blog/archives/2019/06/lessons_learned_1.html> Really
interesting
<https://idlewords.com/2019/05/what_i_learned_trying_to_secure_congressional_campaigns.htm>
first-hand experience from Maciej Cegłowski.
** *** ***** ******* *********** *************
Chinese Military Wants to Develop Custom OS
*[2019.06.06]*
<https://www.schneier.com/blog/archives/2019/06/chinese_militar.html>
Citing security concerns, the Chinese military wants to replace Windows
with its own custom operating system
<https://www.zdnet.com/article/chinese-military-to-replace-windows-os-amid-fears-of-us-hacking/>
:
Thanks to the Snowden, Shadow Brokers, and Vault7 leaks, Beijing officials
are well aware of the US' hefty arsenal of hacking tools, available for
anything from smart TVs to Linux servers, and from routers to common
desktop operating systems, such as Windows and Mac.
Since these leaks have revealed that the US can hack into almost anything,
the Chinese government's plan is to adopt a "security by obscurity"
approach and run a custom operating system that will make it harder for
foreign threat actors -- mainly the US -- to spy on Chinese military
operations.
It's unclear exactly how custom this new OS will be. It could be a Linux
variant, like North Korea's Red Star OS
<https://www.extremetech.com/computing/219963-north-koreas-linux-based-red-star-os-is-as-oppressive-as-youd-expect>.
Or it could be something completely new. Normally, I would be highly
skeptical of a country being able to write and field its own custom
operating system, but China is one of the few that is large enough to
actually be able to do it. So I'm just moderately skeptical.
EDITED TO ADD (6/12): Russia also wants to develop
<https://www.terabitweb.com/2019/06/01/astra-linux-russia-army-html/> its
own flavor of Linux.
** *** ***** ******* *********** *************
Security and Human Behavior (SHB) 2019
*[2019.06.06]*
<https://www.schneier.com/blog/archives/2019/06/security_and_hu_8.html>
Today is the second day of the twelfth Workshop on Security and Human
Behavior <https://www.schneier.com/shb2019/>, which I am hosting at Harvard
University.
SHB is a small, annual, invitational workshop of people studying various
aspects of the human side of security, organized each year by Alessandro
Acquisti, Ross Anderson, and myself. The 50 or so people in the room
include psychologists, economists, computer security researchers,
sociologists, political scientists, criminologists, neuroscientists,
designers, lawyers, philosophers, anthropologists, business school
professors, and a smattering of others. It's not just an interdisciplinary
event; most of the people here are individually interdisciplinary.
The goal is to maximize discussion and interaction. We do that by putting
everyone on panels, and limiting talks to 7-10 minutes. The rest of the
time is left to open discussion. Four hour-and-a-half panels per day over
two days equals eight panels; six people per panel means that 48 people get
to speak. We also have lunches, dinners, and receptions -- all designed so
people from different disciplines talk to each other.
I invariably find this to be the most intellectually stimulating two days
of my professional year. It influences my thinking in many different, and
sometimes surprising, ways.
This year's program is here <https://www.schneier.com/shb2019/schedule/>. This
page <https://www.schneier.com/shb2019/participants/> lists the
participants and includes links to some of their work. As he does every
year, Ross Anderson is liveblogging the talks
<https://www.lightbluetouchpaper.org/2019/06/05/shb-2019-liveblog/> --
remotely, because he was denied a visa
<https://www.schneier.com/blog/archives/2019/05/why_are_cryptog.html>
earlier this year.
Here are my posts on the first
<http://www.schneier.com/blog/archives/2008/06/security_and_hu.html>, second
<http://www.schneier.com/blog/archives/2009/06/second_shb_work.html>, third
<http://www.schneier.com/blog/archives/2010/06/third_shb_works.html>, fourth
<http://www.schneier.com/blog/archives/2011/06/fourth_shb_work.html>, fifth
<https://www.schneier.com/blog/archives/2012/06/security_and_hu_1.html>,
sixth
<https://www.schneier.com/blog/archives/2013/06/security_and_hu_2.html>,
seventh
<https://www.schneier.com/blog/archives/2014/06/security_and_hu_3.html>,
eighth
<https://www.schneier.com/blog/archives/2015/06/security_and_hu_4.html>,
ninth
<https://www.schneier.com/blog/archives/2016/06/security_and_hu_5.html>,
tenth
<https://www.schneier.com/blog/archives/2017/05/security_and_hu_6.html>,
and eleventh
<https://www.schneier.com/blog/archives/2018/05/security_and_hu_7.html> SHB
workshops. Follow those links to find summaries, papers, and occasionally
audio recordings of the various workshops. Ross also maintains a good
webpage of psychology and security resources
<https://www.cl.cam.ac.uk/~rja14/psysec.html>.
** *** ***** ******* *********** *************
iOS Shortcut for Recording the Police
*[2019.06.07]*
<https://www.schneier.com/blog/archives/2019/06/ios_shortcut_fo.html> "Hey
Siri; I'm getting pulled over
<https://www.businessinsider.com/ios-12-shortcut-uses-iphone-to-record-police-during-traffic-stop-2018-10>"
can be a shortcut:
Once the shortcut is installed
<https://www.icloud.com/shortcuts/2d68cb1ee7b84f08ace2fd600b9855b5> and
configured <https://support.apple.com/en-us/HT209055>, you just have to
say, for example, "Hey Siri, I'm getting pulled over." Then the program
pauses music you may be playing, turns down the brightness on the iPhone,
and turns on "do not disturb" mode.
It also sends a quick text to a predetermined contact to tell them you've
been pulled over, and it starts recording using the iPhone's front-facing
camera. Once you've stopped recording, it can text or email the video to a
different predetermined contact and save it to Dropbox.
** *** ***** ******* *********** *************
Employment Scam
*[2019.06.10]*
<https://www.schneier.com/blog/archives/2019/06/employment_scam.html>
Interesting story
<https://arstechnica.com/gadgets/2019/06/scamming-the-scammers-how-i-sniffed-out-and-fought-a-cash-hungry-employment-scam/>
of an old-school remote-deposit capture fraud scam, wrapped up in a fake
employment scam.
Slashdot thread
<https://yro.slashdot.org/story/19/06/02/1848209/scammers-try-elaborate-fake-job-interviews-on-google-hangouts>
.
** *** ***** ******* *********** *************
Workshop on the Economics of Information Security
*[2019.06.11]*
<https://www.schneier.com/blog/archives/2019/06/workshop_on_the_1.html>
Last week, I hosted the eighteenth Workshop on the Economics of Information
Security <https://weis2019.econinfosec.org/> at Harvard. Ross Anderson
liveblogged
the talks
<https://www.lightbluetouchpaper.org/2019/06/03/weis-2019-liveblog/>.
** *** ***** ******* *********** *************
Rock-Paper-Scissors Robot
*[2019.06.12]*
<https://www.schneier.com/blog/archives/2019/06/rock-paper-scis.html> How
in the world did I not know about this for three years?
Researchers at the University of Tokyo have developed a robot
<https://www.extremetech.com/extreme/214512-rock-paper-scissors-robot-wins-100-of-the-time>
that always wins at rock-paper-scissors. It watches the human player's
hand, figures out which finger position the human is about to deploy, and
reacts quickly enough to always win.
EDITED TO ADD (6/13): Seems like this is even older -- from 2013
<https://www.extremetech.com/extreme/214512-rock-paper-scissors-robot-wins-100-of-the-time>
.
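The robot's game logic is trivial once the vision system has predicted the human's gesture mid-throw; the hard part is the millisecond-scale prediction, which the sketch below simply assumes as an input.

    # Counter-move lookup: the entire "strategy" once the prediction exists.
    COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def robot_move(predicted_human_gesture: str) -> str:
        return COUNTER[predicted_human_gesture]

    print(robot_move("scissors"))   # -> "rock"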
** *** ***** ******* *********** *************
Report on the Stalkerware Industry
*[2019.06.13]*
<https://www.schneier.com/blog/archives/2019/06/report_on_the_s.html>
Citizen Lab just published an excellent report
<https://citizenlab.ca/2019/06/the-predator-in-your-pocket-a-multidisciplinary-assessment-of-the-stalkerware-application-industry/>
on the stalkerware industry.
Boing Boing post <https://boingboing.net/2019/06/12/tech-for-abusers.html>.
** *** ***** ******* *********** *************
Video Surveillance by Computer
*[2019.06.14]*
<https://www.schneier.com/blog/archives/2019/06/video_surveilla.html> The
ACLU's Jay Stanley has just published a fantastic report: "The Dawn of
Robot Surveillance <https://www.aclu.org/report/dawn-robot-surveillance>"
(blog post here
<https://www.aclu.org/blog/privacy-technology/surveillance-technologies/32-billion-industry-could-turn-americas-50-million>).
Basically, it lays out a future of ubiquitous video cameras watched by
increasingly sophisticated video analytics software, and discusses the
potential harms to society.
I'm not going to excerpt a piece, because you really need to read the whole
thing.
** *** ***** ******* *********** *************
Computers and Video Surveillance
*[2019.06.14]*
<https://www.schneier.com/blog/archives/2019/06/computers_and_video.html>
It used to be that surveillance cameras were passive. Maybe they just
recorded, and no one looked at the video unless they needed to. Maybe a
bored guard watched a dozen different screens, scanning for something
interesting. In either case, the video was only stored for a few days
because storage was expensive.
Increasingly, none of that is true
<https://www.aclu.org/report/dawn-robot-surveillance>. Recent developments
in video analytics -- fueled by artificial intelligence techniques like
machine learning -- enable computers to watch and understand surveillance
videos with human-like discernment. Identification technologies make it
easier to automatically figure out who is in the videos. And finally, the
cameras themselves have become cheaper, more ubiquitous, and much better;
cameras mounted on drones can effectively watch an entire city. Computers
can watch all the video without human issues like distraction, fatigue,
training, or needing to be paid. The result is a level of surveillance that
was impossible just a few years ago.
An ACLU report published Thursday called "The Dawn of Robot Surveillance
<https://www.aclu.org/report/dawn-robot-surveillance>" says AI-aided video
surveillance "won't just record us, but will also make judgments about us
based on their understanding of our actions, emotions, skin color,
clothing, voice, and more. These automated 'video analytics' technologies
threaten to fundamentally change the nature of surveillance."
Let's take the technologies one at a time. First: video analytics.
Computers are getting better at recognizing what's going on in a video.
Detecting when a person or vehicle enters a forbidden area
<https://www.intelli-vision.com/intelligent-video-analytics/> is easy.
Modern systems can alarm when someone is walking in the wrong direction
<https://www.axis.com/en-us/products/camera-applications/application-gallery>
-- going in through an exit-only corridor, for example. They can count
people or cars. They can detect when luggage is left unattended, or when
previously unattended luggage is picked up and removed. They can detect
when someone is loitering
<http://www.technoaware.com/2018/en/products/vtrack/> in an area, is lying
down
<https://www.ibm.com/support/knowledgecenter/SS88XH_2.0.0/iva/admin_configure_lyingb.html>,
or is running
<https://www.security.honeywell.com/me/-/media/SecurityME/Resources/ProductDocuments/HVSHVA4903ME0313DSE-pdf.pdf>.
Increasingly, they can detect particular actions by people. Amazon's
cashier-less stores rely on video analytics
<https://www.washingtonpost.com/news/business/wp/2018/01/22/inside-amazon-go-the-camera-filled-convenience-store-that-watches-you-back/>
to figure out when someone picks an item off a shelf and doesn't put it
back.
More than identifying actions, video analytics allow computers to
understand what's going on in a video: They can flag people based on their
clothing <https://www.ri.cmu.edu/pub_files/2012/10/Kitani-ECCV2012.pdf> or
behavior, identify people's emotions
<https://slate.com/technology/2014/06/emotient-vibraimage-we-need-to-regulate-emotion-detecting-technology.html>
through body language and behavior, and find people who are acting
"unusual" based on everyone else around them. Those same Amazon in-store
cameras can analyze customer sentiment
<https://aws.amazon.com/blogs/machine-learning/shopper-sentiment-analyzing-in-store-customer-experience/>.
Other systems can describe what's happening in a video scene.
Computers can also identify people. AIs are getting better at identifying
people in those videos. Facial recognition technology is improving all the
time, made easier by the enormous stockpile of tagged photographs we give
to Facebook and other social media sites, and the photos governments
collect in the process of issuing ID cards and drivers licenses
<https://www.vice.com/en_us/article/xwny7d/mark-cuban-facial-recognition-suspect-technologies>.
The technology already exists to automatically identify
<https://www.cnbc.com/2019/05/16/this-chinese-facial-recognition-start-up-can-id-a-person-in-seconds.html>
everyone a camera "sees" in real time. Even without video identification,
we can be identified by the unique information continuously broadcasted by
the smartphones
<https://www.technologyreview.com/s/427687/if-you-have-a-smart-phone-anyone-can-now-track-your-every-move/>
we carry with us everywhere, or by our laptops or Bluetooth-connected
devices. Police have been tracking phones
<https://www.aclu.org/issues/privacy-technology/surveillance-technologies/stingray-tracking-devices-whos-got-them>
for years, and this practice can now be combined with video analytics.
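Conceptually, identifying everyone a camera "sees" reduces to nearest-neighbor search over face embeddings. The sketch below assumes a hypothetical embed_face() model and an in-memory gallery; real systems use trained networks and large indexed databases, and no specific product is implied.

    import numpy as np

    def embed_face(face_image) -> np.ndarray:
        # Placeholder: a real model maps a face crop to, say, a 512-dim vector.
        raise NotImplementedError

    def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6) -> str:
        # gallery: name -> reference embedding; cosine similarity as the score.
        best_name, best_score = "unknown", -1.0
        for name, ref in gallery.items():
            score = float(np.dot(probe, ref) /
                          (np.linalg.norm(probe) * np.linalg.norm(ref)))
            if score > best_score:
                best_name, best_score = name, score
        return best_name if best_score >= threshold else "unknown"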
Once a monitoring system identifies people, their data can be combined with
other data, either collected or purchased: from cell phone records, GPS
surveillance history, purchasing data, and so on. Social media companies
like Facebook have spent years learning about our personalities and beliefs
by what we post, comment on, and "like." This is "data inference
<https://www.nytimes.com/2019/04/21/opinion/computational-inference.html>,"
and when combined with video it offers a powerful window into people's
behaviors and motivations.
Camera resolution is also improving. Gigapixel cameras are so good that they
can capture individual faces and identify license plates
<https://bgr.com/2018/12/22/bigpixel-shanghai-photo-195-gigapixels/> in
photos taken miles away. "Wide-area surveillance" cameras can be
mounted on airplanes
and drones
<https://www.extremetech.com/extreme/146909-darpa-shows-off-1-8-gigapixel-surveillance-drone-can-spot-a-terrorist-from-20000-feet>,
and can operate continuously. On the ground, cameras can be hidden in
street lights
<https://qz.com/1458475/the-dea-and-ice-are-hiding-surveillance-cameras-in-streetlights/>
and other regular objects. In space, satellite cameras have also
dramatically
<https://www.businessinsider.com/satellite-image-resolution-keeps-improving-2015-10>
improved
<https://www.popsci.com/gaofen-4-worlds-most-powerful-geo-spy-satellite-continues-chinas-great-leap-forward-into-space>
.
Data storage has become incredibly cheap, and cloud storage makes it all so
easy. Video data can easily be saved for years, allowing computers to
conduct all of this surveillance backwards in time.
In democratic countries, such surveillance is marketed as crime prevention
-- or counterterrorism. In countries like China, it is blatantly used
to suppress
political activity
<https://www.nytimes.com/2019/05/22/world/asia/china-surveillance-xinjiang.html>
and for social control. In all instances, it's being implemented without a
lot of public debate by law-enforcement agencies and by corporations in
public spaces they control.
This is bad, because ubiquitous surveillance will drastically change our
relationship to society. We've never lived in this sort of world, even
those of us who have lived through previous totalitarian regimes. The
effects will be felt in many different areas. False positives -- when the
surveillance system gets it wrong -- will lead to harassment and worse.
Discrimination
<https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html>
will become automated. Those who fall outside norms will be marginalized.
And most importantly, the inability to live anonymously will have an
enormous chilling effect
<https://www.wired.com/story/mcsweeneys-excerpt-the-right-to-experiment/>
on speech and behavior, which in turn will hobble society's ability to
experiment and change. A recent ACLU report
<https://www.aclu.org/report/dawn-robot-surveillance> discusses these harms
in more depth. While it's possible that some of this surveillance is worth
the trade-offs, we as a society need to deliberately and intelligently make
decisions about it.
Some jurisdictions are starting to notice. Last month, San Francisco became
<https://www.vox.com/recode/2019/5/14/18623897/san-francisco-facial-recognition-ban-explained>
the first city to ban facial recognition technology
<https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html>
by police and other government agencies. Similar bans are being considered
in Somerville, MA, and Oakland, CA. These are exceptions, and limited to
the more liberal areas of the country.
We often believe that technological change is inevitable, and that there's
nothing we can do to stop it -- or even to steer it. That's simply not
true. We're led to believe this because we don't often see it, understand
it, or have a say in how or when it is deployed. The problem is that
technologies of cameras, resolution, machine learning, and artificial
intelligence are complex and specialized.
Laws like what was just passed in San Francisco won't stop the development
of these technologies, but they're not intended to. They're intended as
pauses, so our policy making can catch up with technology. As a general
rule, the US government tends to ignore technologies as they're being
developed and deployed, so as not to stifle innovation. But as the rate of
technological change increases, so do the unanticipated effects on our
lives. Just as we've been surprised by the threats to democracy caused by
surveillance capitalism, AI-enabled video surveillance will have similar
surprising effects. Maybe a pause in our headlong deployment of these
technologies will allow us the time to discuss what kind of society we want
to live in, and then enact rules to bring that kind of society about.
This essay previously appeared
<https://www.vice.com/en_us/article/bj93z5/ai-has-made-video-surveillance-automated-and-terrifying>
on Vice Motherboard.
** *** ***** ******* *********** *************
Upcoming Speaking Engagements
*[2019.06.14]*
<https://www.schneier.com/blog/archives/2019/06/upcoming_speaki_6.html>
This is a current list of where and when I am scheduled to speak:
- I'm speaking on "Securing a World of Physically Capable Computers
<https://www.eventbrite.co.uk/e/securing-a-world-of-physically-capable-computers-with-bruce-schneier-tickets-61285679116>"
at Oxford University on Monday, June 17, 2019.
The list is maintained on this page <https://www.schneier.com/events/>.
** *** ***** ******* *********** *************
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing
summaries, analyses, insights, and commentaries on security technology. To
subscribe, or to read back issues, see Crypto-Gram's web page
<https://www.schneier.com/crypto-gram.html>.
You can also read these articles on my blog, Schneier on Security
<https://www.schneier.com>.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues
and friends who will find it valuable. Permission is also granted to
reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called
a security guru by the Economist. He is the author of over one dozen books
-- including his latest, Click Here to Kill Everybody
<https://www.schneier.com/books/click_here/> -- as well as hundreds of
articles, essays, and academic papers. His newsletter and blog are read by
over 250,000 people. Schneier is a fellow at the Berkman Klein Center for
Internet and Society at Harvard University; a Lecturer in Public Policy at
the Harvard Kennedy School; a board member of the Electronic Frontier
Foundation, AccessNow, and the Tor Project; and an advisory board member of
EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security.
Crypto-Gram is a personal newsletter. Opinions expressed are not
necessarily those of IBM or IBM Security.
Copyright © 2019 by Bruce Schneier.
** *** ***** ******* *********** *************
Mailing list hosting graciously provided by MailChimp
<https://mailchimp.com/>. Sent without web bugs or link tracking.
--
=====================================================
directory http://www.alansondheim.org tel 718-813-3285
email sondheim at panix.com, sondheim at gmail.com
=====================================================