[NetBehaviour] Fwd: CRYPTO-GRAM, October 15, 2020

Alan Sondheim sondheim at gmail.com
Fri Oct 16 04:47:06 CEST 2020

Vastly interesting including the section on acedia -

---------- Forwarded message ---------
From: Bruce Schneier <schneier at schneier.com>
Date: Thu, Oct 15, 2020 at 9:22 AM
Subject: CRYPTO-GRAM, October 15, 2020
To: <sondheim at panix.com>

October 15, 2020

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier at schneier.com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram's web page

Read this issue on the web

These same essays and news items appear in the Schneier on Security
<https://www.schneier.com/> blog, along with a lively and intelligent
comment section. An RSS feed is available.

** *** ***** ******* *********** *************
In this issue:

   1. Interesting Attack on the EMV Smartcard Payment Standard
   2. Upcoming Speaking Engagements
   3. Privacy Analysis of Ambient Light Sensors
   4. How the FIN7 Cybercrime Gang Operates
   5. New Bluetooth Vulnerability
   6. Matt Blaze on OTP Radio Stations
   7. Nihilistic Password Security Questions
   8. Former NSA Director Keith Alexander Joins Amazon's Board of Directors
   9. Amazon Delivery Drivers Hacking Scheduling System
   10. Interview with the Author of the 2000 Love Bug Virus
   11. Documented Death from a Ransomware Attack
   12. Iranian Government Hacking Android
   13. CEO of NS8 Charged with Securities Fraud
   14. On Executive Order 12333
   15. Hacking a Coffee Maker
   16. Negotiating with Ransomware Gangs
   17. Detecting Deep Fakes with a Heartbeat
   18. COVID-19 and Acedia
   19. On Risk-Based Authentication
   20. Swiss-Swedish Diplomatic Row Over Crypto AG
   21. New Privacy Features in iOS 14
   22. Hacking Apple for Profit
   23. Google Responds to Warrants for "About" Searches

** *** ***** ******* *********** *************
Interesting Attack on the EMV Smartcard Payment Standard

It’s complicated <https://arxiv.org/pdf/2006.08249.pdf>, but it’s basically
a man-in-the-middle attack that involves two smartphones. The first phone
reads the actual smartcard and forwards the required information to a second
phone, which conducts the transaction at the POS terminal, convincing the
terminal to complete it without the normally required PIN.

From a news article:

The researchers were able to demonstrate that it is possible to exploit the
vulnerability in practice, although it is a fairly complex process. They
first developed an Android app and installed it on two NFC-enabled mobile
phones. This allowed the two devices to read data from the credit card chip
and exchange information with payment terminals. Incidentally, the
researchers did not have to bypass any special security features in the
Android operating system to install the app.

To obtain unauthorized funds from a third-party credit card, the first
mobile phone is used to scan the necessary data from the credit card and
transfer it to the second phone. The second phone is then used to debit
the amount at the checkout, as many cardholders now do with their mobile
phones. As the app declares that the customer is the authorized user of
the credit card, the vendor does not realize that the transaction is
fraudulent. The crucial factor is that the app outsmarts the card’s
security system. Although the amount is over the limit and requires PIN
verification, no code is requested.

The paper: “The EMV Standard: Break, Fix, Verify”:

*Abstract:* EMV is the international protocol standard for smartcard
payment and is used in over 9 billion cards worldwide. Despite the
standard’s advertised security, various issues have been previously
uncovered, deriving from logical flaws that are hard to spot in EMV’s
lengthy and complex specification, running over 2,000 pages.

We formalize a comprehensive symbolic model of EMV in Tamarin, a
state-of-the-art protocol verifier. Our model is the first that supports a
fine-grained analysis of all relevant security guarantees that EMV is
intended to offer. We use our model to automatically identify flaws that
lead to two critical attacks: one that defrauds the cardholder and another
that defrauds the merchant. First, criminals can use a victim’s Visa
contact-less card for high-value purchases, without knowledge of the card’s
PIN. We built a proof-of-concept Android application and successfully
demonstrated this attack on real-world payment terminals. Second, criminals
can trick the terminal into accepting an unauthentic offline transaction,
which the issuing bank should later decline, after the criminal has walked
away with the goods. This attack is possible for implementations following
the standard, although we did not test it on actual terminals for ethical
reasons. Finally, we propose and verify improvements to the standard that
prevent these attacks, as well as any other attacks that violate the
considered security properties. The proposed improvements can be easily
implemented in the terminals and do not affect the cards in circulation.
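
The essence of the relay can be sketched as a toy model. This is not real
EMV (actual transactions exchange ISO 7816 APDUs and cryptograms); the
"cdcvm_performed" flag below is an illustrative stand-in for the Card
Transaction Qualifiers field that the real attack rewrites in transit:

```python
# Toy model of the two-phone relay attack. NOT real EMV: field names and
# the PIN limit are illustrative stand-ins only.

def card_response(amount_chf: int) -> dict:
    """What the (simulated) contactless card tells a reader."""
    return {
        "pan": "4111...1111",       # truncated stand-in card number
        "amount": amount_chf,
        "cdcvm_performed": False,   # card never claims on-device verification
    }

def relay(msg: dict) -> dict:
    """Phone 1 reads the card; phone 2 replays a modified copy to the
    terminal, claiming the cardholder was already verified on their own
    device (CDCVM), so no PIN is requested."""
    tampered = dict(msg)
    tampered["cdcvm_performed"] = True
    return tampered

def terminal_accepts(msg: dict, pin_limit_chf: int = 80) -> bool:
    """A terminal that (incorrectly) trusts the unauthenticated flag."""
    if msg["amount"] <= pin_limit_chf:
        return True                     # below the limit: no PIN anyway
    return msg["cdcvm_performed"]       # above it: trusts the flag

honest = card_response(amount_chf=200)
print(terminal_accepts(honest))         # False: a PIN would be demanded
print(terminal_accepts(relay(honest)))  # True: accepted without a PIN
```

Roughly speaking, the paper’s proposed fix amounts to having the card
authenticate exactly this kind of field, so the terminal can detect the
tampering.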

** *** ***** ******* *********** *************
Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

   - I’m speaking at the Cybersecurity Law & Policy Scholars Virtual
   on September 17, 2020.
   - I’m keynoting the Canadian Internet Registration Authority’s online
   symposium, Canadians Connected
   <https://member.cira.ca/Events/CanadiansConnected/Events/About.aspx>, on
   Wednesday, September 23, 2020.
   - I’m giving a webinar as part of the Online One Conference 2020
   <https://one-conference.nl/> on September 29, 2020.
   - I’m speaking at the (ISC)² Security Congress 2020
   <https://www.isc2.org/Congress>, November 16-18, 2020.

The list is maintained on this page <https://www.schneier.com/events/>.

** *** ***** ******* *********** *************
Privacy Analysis of Ambient Light Sensors

Interesting privacy analysis
of the Ambient Light Sensor API. And a blog post.
Especially note the “Lessons Learned” section.

** *** ***** ******* *********** *************
How the FIN7 Cybercrime Gang Operates

The Grugq has written an excellent essay
on how the Russian cybercriminal gang FIN7 operates. An excerpt:

The secret of FIN7’s success is their *operational art of cyber crime.*
They managed their resources and operations effectively, allowing them to
successfully attack and exploit hundreds of victim organizations. FIN7 was
not the most elite hacker group, but they developed a number of fascinating
innovations. Looking at the process triangle (people, process, technology),
their technology wasn’t sophisticated, but their people management and
business processes were.

Their business... is crime! And every business needs business goals, so I
wrote a mock FIN7 mission statement:

*Our mission is to proactively leverage existing long-term, high-impact
growth strategies so that we may deliver the kind of results on the bottom
line that our investors expect and deserve.*

How does FIN7 actualize this vision? This is CrimeOps:

   - Repeatable business process
   - CrimeBosses manage workers, projects, data and money.
   - CrimeBosses don’t manage technical innovation. They use incremental
   improvement to TTPs to remain effective, but no more.
   - Frontline workers don’t need to innovate (because the process is

** *** ***** ******* *********** *************
New Bluetooth Vulnerability

There’s a new unpatched Bluetooth vulnerability:

The issue is with a protocol called Cross-Transport Key Derivation (or
CTKD, for short). When, say, an iPhone is getting ready to pair up with a
Bluetooth-powered device, CTKD’s role is to set up two separate
authentication keys for that phone: one for a “Bluetooth Low Energy”
device, and one for a device using what’s known as the “Basic Rate/Enhanced
Data Rate” standard. Different devices require different amounts of data --
and battery power -- from a phone. Being able to toggle between the
standards needed for Bluetooth devices that take a ton of data (like a
Chromecast), and those that require a bit less (like a smartwatch) is more
efficient. Incidentally, it might also be less secure.

According to the researchers, if a phone supports both of those standards
but doesn’t require some sort of authentication or permission on the user’s
end, a hackery sort who’s within Bluetooth range can use its CTKD
connection to derive its own competing key. With that connection, according
to the researchers, this sort of ersatz authentication can also allow bad
actors to weaken the encryption that these keys use in the first place --
which can open its owner up to more attacks further down the road, or
perform “man in the middle” style attacks that snoop on unprotected data
being sent by the phone’s apps and services.
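
The key-overwrite problem can be pictured with a toy model: one long-term
key per transport, with CTKD linking the two. The derivation below is a
stand-in (the real spec uses AES-CMAC-based functions, not SHA-256), and
the "strength" field is illustrative; the point is only that a pairing on
one transport silently replaces the other transport's key:

```python
# Toy model of the BLURtooth key-overwrite issue. Not the real Bluetooth
# derivation; SHA-256 stands in so the two transports get linked keys.
import hashlib

def derive_other_transport_key(key: bytes) -> bytes:
    """Stand-in for CTKD: derive the other transport's key from this one."""
    return hashlib.sha256(b"ctkd" + key).digest()

class Phone:
    def __init__(self):
        self.keys = {}  # transport name -> (long-term key, strength)

    def pair(self, transport: str, key: bytes, strength: int):
        """Vulnerable behavior: a new pairing on one transport silently
        overwrites the other transport's key via CTKD, even when the
        existing key was stronger."""
        self.keys[transport] = (key, strength)
        other = "BR/EDR" if transport == "LE" else "LE"
        self.keys[other] = (derive_other_transport_key(key), strength)

phone = Phone()
phone.pair("BR/EDR", key=b"K" * 16, strength=16)  # strong, trusted pairing

# Attacker in range pairs over LE with a weak key, no user interaction:
phone.pair("LE", key=b"a" * 7, strength=7)
print(phone.keys["BR/EDR"][1])  # 7: the strong key has been replaced
```

The obvious mitigation, and the gist of the Bluetooth SIG's guidance, is to
refuse to overwrite an existing key with a weaker one.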

Another article:

Patches are not immediately available at the time of writing. The only way
to protect against BLURtooth attacks is to control the environment in which
Bluetooth devices are paired, in order to prevent man-in-the-middle
attacks, or pairings with rogue devices carried out via social engineering
(tricking the human operator).

However, patches are expected to be available at some point. When they
are, they’ll most likely be integrated as firmware or operating system
updates for Bluetooth-capable devices.

The timeline for these updates is, for the moment, unclear, as device
vendors and OS makers usually work on different timelines, and some may
not prioritize security patches as much as others. The number of
vulnerable devices is also unclear and hard to quantify.

Many Bluetooth devices can’t be patched.

Final note: this seems to be another example of simultaneous discovery:

According to the Bluetooth SIG, the BLURtooth attack was discovered
independently by two groups of academics from the École Polytechnique
Fédérale de Lausanne (EPFL) and Purdue University.

** *** ***** ******* *********** *************
Matt Blaze on OTP Radio Stations

Matt Blaze discusses <https://www.mattblaze.org/blog/neinnines/> (also here
<https://twitter.com/mattblaze/status/1303769018411757569>) an interesting
mystery about a Cuban one-time-pad radio station, and a random number
generator error that probably helped lead to the arrest of a pair of
Russian spies.

** *** ***** ******* *********** *************
Nihilistic Password Security Questions

Posted three years ago, but definitely appropriate for the times.

** *** ***** ******* *********** *************
Former NSA Director Keith Alexander Joins Amazon's Board of Directors

This sounds like a bad idea.

** *** ***** ******* *********** *************
Amazon Delivery Drivers Hacking Scheduling System

Amazon drivers -- all gig workers who don’t work for the company -- are hanging
cell phones in trees
near Amazon delivery stations, fooling the system into thinking that they
are closer than they actually are:

The phones in trees seem to serve as master devices that dispatch routes to
multiple nearby drivers in on the plot, according to drivers who have
observed the process. They believe an unidentified person or entity is
acting as an intermediary between Amazon and the drivers and charging
drivers to secure more routes, which is against Amazon’s policies.

The perpetrators likely dangle multiple phones in the trees to spread the
work around to multiple Amazon Flex accounts and avoid detection by Amazon,
said Chetan Sharma, a wireless industry consultant. If all the routes were
fed through one device, it would be easy for Amazon to detect, he said.

“They’re gaming the system in a way that makes it harder for Amazon to
figure it out,” Sharma said. “They’re just a step ahead of Amazon’s
algorithm and its developers.”

** *** ***** ******* *********** *************
Interview with the Author of the 2000 Love Bug Virus

No real surprises, but we finally have the story:

The story he went on to tell is strikingly straightforward. De Guzman was
poor, and internet access was expensive. He felt that getting online was
almost akin to a human right (a view that was ahead of its time). Getting
access required a password, so his solution was to steal the passwords from
those who’d paid for them. Not that de Guzman regarded this as stealing: He
argued that the password holder would get no less access as a result of
having their password unknowingly “shared.” (Of course, his logic
conveniently ignored the fact that the internet access provider would have
to serve two people for the price of one.)

De Guzman came up with a solution: a password-stealing program. In
hindsight, perhaps his guilt should have been obvious, because this was
almost exactly the scheme he’d mapped out in a thesis proposal that had
been rejected by his college the previous year.

** *** ***** ******* *********** *************
Documented Death from a Ransomware Attack

A Düsseldorf woman died
when a ransomware attack against a hospital forced her to be taken to a
different hospital in another city.

I think this is the first documented case of a cyberattack causing a
fatality. UK hospitals had to redirect patients during the 2017 WannaCry
ransomware attack, but there were no documented fatalities from that event.

The police are treating this as a homicide.

** *** ***** ******* *********** *************
Iranian Government Hacking Android

The *New York Times* wrote about
a still-unreleased report from Check Point and the Miaan Group:

The reports, which were reviewed by The New York Times in advance of their
release, say that the hackers have successfully infiltrated what were
thought to be secure mobile phones and computers belonging to the targets,
overcoming obstacles created by encrypted applications such as Telegram
and, according to Miaan, even gaining access to information on WhatsApp.
Both are popular messaging tools in Iran. The hackers also have created
malware disguised as Android applications, the reports said.

It looks like the standard technique of getting the victim to open a
document or application.

** *** ***** ******* *********** *************
CEO of NS8 Charged with Securities Fraud

The founder and CEO of the Internet security company NS8
<https://www.ns8.com/en-us> has been arrested
and “charged in a Complaint in Manhattan federal court with securities
fraud, fraud in the offer and sale of securities, and wire fraud.”

I admit that I’ve never even heard of the company before.

** *** ***** ******* *********** *************
On Executive Order 12333

Mark Jaycox has written a long article on the US Executive Order 12333: “No
Oversight, No Limits, No Worries: A Primer on Presidential Spying and
Executive Order 12,333.”

*Abstract*: Executive Order 12,333 (“EO 12333”) is a 1980s Executive Order
signed by President Ronald Reagan that, among other things, establishes an
overarching policy framework for the Executive Branch’s spying powers.
Although electronic surveillance programs authorized by EO 12333 generally
target foreign intelligence from foreign targets, its permissive targeting
standards allow for the substantial collection of Americans’ communications
containing little to no foreign intelligence value. This fact alone
necessitates closer inspection.

This working draft conducts such an inspection by collecting and coalescing
the various declassifications, disclosures, legislative investigations, and
news reports concerning EO 12333 electronic surveillance programs in order
to provide a better understanding of how the Executive Branch implements
the order and the surveillance programs it authorizes. The Article pays
particular attention to EO 12333’s designation of the National Security
Agency as primarily responsible for conducting signals intelligence, which
includes the installation of malware, the analysis of internet traffic
traversing the telecommunications backbone, the hacking of U.S.-based
companies like Yahoo and Google, and the analysis of Americans’
communications, contact lists, text messages, geolocation data, and other

After exploring the electronic surveillance programs authorized by EO
12333, this Article proposes reforms to the existing policy framework,
including narrowing the aperture of authorized surveillance, increasing
privacy standards for the retention of data, and requiring greater
transparency and accountability.

EDITED TO ADD (10/12): Good *New York Times* article
from 1983 on EO 12333, pointing out that Congress had never limited its
power. It still hasn’t.

And a related article
on the FISA Court.

** *** ***** ******* *********** *************
Hacking a Coffee Maker

As expected, IoT devices are filled with vulnerabilities:

As a thought experiment, Martin Hron, a researcher at security company
Avast, reverse engineered one of the older coffee makers to see what kinds
of hacks he could do with it. After just a week of effort, the unqualified
answer was: quite a lot. Specifically, he could trigger the coffee maker to
turn on the burner, dispense water, spin the bean grinder, and display a
ransom message, all while beeping repeatedly. Oh, and by the way, the only
way to stop the chaos was to unplug the power cord.


In any event, Hron said the ransom attack is just the beginning of what an
attacker could do. With more work, he believes, an attacker could program a
coffee maker -- and possibly other appliances made by Smarter -- to attack
the router, computers, or other devices connected to the same network. And
the attacker could probably do it with no overt sign anything was amiss.

** *** ***** ******* *********** *************
Negotiating with Ransomware Gangs

Really interesting conversation
<https://redtape.substack.com/p/whats-it-really-like-to-negotiate> with
someone who negotiates with ransomware gangs:

For now, it seems that paying ransomware, while obviously risky and
empowering/encouraging ransomware attackers, can perhaps be comported so as
not to break any laws (like anti-terrorist laws, FCPA, conspiracy and
others) and even if payment is arguably unlawful, seems unlikely to be
prosecuted. Thus, the decision whether to pay or ignore a ransomware
demand seems less of a legal and more of a practical determination,
almost like a cost-benefit analysis.

The arguments for rendering a ransomware payment include:

   - Payment is the least costly option;
   - Payment is in the best interest of stakeholders (e.g. a hospital
   patient in desperate need of an immediate operation whose records are
   locked up);
   - Payment can avoid being fined for losing important data;
   - Payment means not losing highly confidential information; and
   - Payment may mean not going public with the data breach.

The arguments against rendering a ransomware payment include:

   - Payment does not guarantee that the right encryption keys with the
   proper decryption algorithms will be provided;
   - Payment further funds additional criminal pursuits of the attacker,
   enabling a cycle of ransomware crime;
   - Payment can do damage to a corporate brand;
   - Payment may not stop the ransomware attacker from returning;
   - If victims stopped making ransomware payments, the ransomware revenue
   stream would stop and ransomware attackers would have to move on to
   perpetrating another scheme; and
   - Using Bitcoin to pay a ransomware attacker can put organizations at
   risk. Most victims must buy Bitcoin on entirely unregulated and
   free-wheeling exchanges that can also be hacked, leaving buyers’ bank
   account information stored on these exchanges vulnerable.

When confronted with a ransomware attack, the options all seem bleak. Pay
the hackers and the victim may not only prompt future attacks, but there is
also no guarantee that the hackers will restore a victim’s dataset. Ignore
the hackers and the victim may incur significant financial damage or even
find themselves out of business. The only guarantees during a ransomware
attack are the fear, uncertainty and dread inevitably experienced by the
victim.

** *** ***** ******* *********** *************
Detecting Deep Fakes with a Heartbeat

Researchers can detect deep fakes
because they don’t convincingly mimic human blood circulation in the face:

In particular, video of a person’s face contains subtle shifts in color
that result from pulses in blood circulation. You might imagine that these
changes would be too minute to detect merely from a video, but viewing videos
that have been enhanced
<https://www.youtube.com/watch?time_continue=4&v=ONZcjs1Pjmk> to exaggerate
these color shifts will quickly disabuse you of that notion. This
phenomenon forms the basis of a technique called photoplethysmography, or
PPG for short, which can be used, for example, to monitor newborns
<https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6308706/> without having to
attach anything to their very sensitive skin.

Deep fakes don’t lack such circulation-induced shifts in color, but they
don’t recreate them with high fidelity. The researchers at SUNY and Intel
found that “biological signals are not coherently preserved in different
synthetic facial parts” and that “synthetic content does not contain frames
with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your
pulse shows up in your face.
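
The color-shift signal is easy to picture in code. The sketch below is a
self-contained toy: the "video" is synthetic (a faint sinusoidal
green-channel shift standing in for a real face), and the rate estimate is
a crude zero-crossing count, not the FakeCatcher method:

```python
# Toy PPG extraction: average the green channel over a skin patch, frame
# by frame, and a periodic pulse signal appears.
import math

FPS = 30
PULSE_HZ = 1.2  # ~72 beats per minute

def synthetic_frame(t: float, size: int = 8):
    """An 8x8 'skin' patch whose green channel pulses faintly with t
    (phase chosen so whole beats fit in the clip)."""
    g = 120 + 2.0 * math.sin(2 * math.pi * PULSE_HZ * t - math.pi / 2)
    return [[(150, g, 110)] * size for _ in range(size)]

def mean_green(frame) -> float:
    pixels = [px for row in frame for px in row]
    return sum(px[1] for px in pixels) / len(pixels)

# The per-frame green means form the raw PPG signal (5 seconds of video).
signal = [mean_green(synthetic_frame(i / FPS)) for i in range(FPS * 5)]

# Crude rate estimate: count rising crossings of the signal's mean.
mean = sum(signal) / len(signal)
crossings = sum(1 for a, b in zip(signal, signal[1:]) if a < mean <= b)
bpm = crossings / 5 * 60  # crossings per second, times 60

print(f"estimated pulse: {bpm:.0f} bpm")  # estimated pulse: 72 bpm
```

FakeCatcher goes further, checking whether these signals are spatially and
temporally consistent across facial regions, which is where deep fakes fail.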

The inconsistencies in PPG signals found in deep fakes provided these
researchers with the basis for a deep-learning system of their own, dubbed
FakeCatcher, which can categorize videos of a person’s face as either real
or fake with greater than 90 percent accuracy. And these same three
researchers followed this study with another demonstrating that this
approach can be applied not only to revealing that a video is fake, but
also to show what software was used to create it.

Of course, this is an arms race. I expect deep fake programs to become good
enough to fool FakeCatcher in a few months.

** *** ***** ******* *********** *************
COVID-19 and Acedia

This isn’t my usual essay topic. Still, I want to put it on my blog.*

Six months into the pandemic with no end in sight, many of us have been
feeling a sense of unease that goes beyond anxiety or distress. It’s a
nameless feeling that somehow makes it hard to go on with even the nice
things we regularly do.

What’s blocking our everyday routines is not the anxiety of lockdown
adjustments, or the worries about ourselves and our loved ones -- real
though those worries are. It isn’t even the sense that, if we’re really
honest with ourselves, much of what we do is pretty self-indulgent when
held up against the urgency of a global pandemic.

It is something more troubling and harder to name: an uncertainty about why
we would go on doing much of what for years we’d taken for granted as
inherently valuable.

What we are confronting is something many writers in the pandemic have
approached from varying angles: a restless distraction that stems not just
from not knowing when it will all end, but also from not knowing what that
end will look like. Perhaps the sharpest insight
into this feeling has come from Jonathan Zecher, a historian of religion,
who linked it to the forgotten Christian term: acedia.

Acedia was a malady that apparently plagued many medieval monks. It’s a
sense of no longer caring about caring, not because one had become
apathetic, but because somehow the whole structure of care had become
jammed up.

What could this particular form of melancholy mean in an urgent global
crisis? On the face of it, all of us care very much about the health risks
to those we know and don’t know. Yet lurking alongside such immediate cares
is a sense of dislocation that somehow interferes with how we care.

The answer can be found in an extreme thought experiment about death. In
2013, philosopher Samuel Scheffler explored
a core assumption about death. We all assume that there will be a future
world that survives our particular life, a world populated by people
roughly like us, including some who are related to us or known to us.
Though we rarely notice or acknowledge it, this presumed future world is the
horizon towards which everything we do in the present is oriented.

But what, Scheffler asked, if we lose that assumed future world -- because,
say, we are told that human life will end on a fixed date not far after our
own death? Then the things we value would start to lose their value. Our
sense of why things matter today is built on the presumption that they will
continue to matter in the future, even when we ourselves are no longer
around to value them.

Our present relations to people and things are, in this deep way,
future-oriented. Symphonies are written, buildings built, children
conceived in the present, but always with a future in mind. What happens to
our ethical bearings when we start to lose our grip on that future?

It’s here, moving back to the particular features of the global pandemic,
that we see more clearly what drives the restlessness and dislocation so
many have been feeling. The source of our current acedia is not the literal
loss of a future; even the most pessimistic scenarios surrounding COVID-19
have our species surviving. The dislocation is more subtle: a disruption in
pretty much every future frame of reference on which just going on in the
present relies.

Moving around is what we do as creatures, and for that we need horizons.
COVID-19 has erased many of the spatial and temporal horizons we rely on,
even if we don’t notice them very often. We don’t know how the economy will
look, how social life will go on, how our home routines will be changed,
how work will be organized, how universities or the arts or local commerce
will survive.

What unsettles us is not only fear of change. It’s that, if we can no
longer trust in the future, many things become irrelevant, retrospectively
pointless. And by that we mean from the perspective of a future whose basic
shape we can no longer take for granted. This fundamentally disrupts how we
weigh the value of what we are doing right now. It becomes especially hard
under these conditions to hold on to the value in activities that, by their
very nature, are future-directed, such as education or institution-building.

That’s what many of us are feeling. That’s today’s acedia.

Naming this malaise may seem more trouble than it’s worth, but the opposite
is true. Perhaps the worst thing about medieval acedia was that monks
struggled with its dislocation in isolation. But today’s disruption of our
sense of a future must be a shared challenge. Because what’s disrupted is
the structure of care that sustains why we go on doing things together, and
this can only be repaired through renewed solidarity.

Such solidarity, however, has one precondition: that we openly discuss the
problem of acedia, and how it prevents us from facing our deepest future
uncertainties. Once we have done that, we can recognize it as a problem we
choose to face together -- across political and cultural lines -- as
families, communities, nations and a global humanity. Which means doing so
in acceptance of our shared vulnerability, rather than suffering each on
our own.

This essay was written with Nick Couldry, and previously appeared
on CNN.com.

** *** ***** ******* *********** *************
On Risk-Based Authentication

Interesting usability study: “More Than Just Good Passwords? A Study on
Usability and Security Perceptions of Risk-based Authentication”:

*Abstract*: Risk-based Authentication (RBA) is an adaptive security measure
to strengthen password-based authentication. RBA monitors additional
features during login, and when observed feature values differ
significantly from previously seen ones, users have to provide additional
authentication factors such as a verification code. RBA has the potential
to offer more usable authentication, but the usability and the security
perceptions of RBA are not studied well.

We present the results of a between-group lab study (n=65) to evaluate
usability and security perceptions of two RBA variants, one 2FA variant,
and password-only authentication. Our study shows with significant results
that RBA is considered to be more usable than the studied 2FA variants,
while it is perceived as more secure than password-only authentication in
general and comparably secure to 2FA in a variety of application types. We
also observed RBA usability problems and provide recommendations for
mitigation. Our contribution provides a first deeper understanding of the
users’ perception of RBA and helps to improve RBA implementations for a
broader user acceptance.

Paper’s website <https://riskbasedauthentication.org/usability/perceptions/>.
I’ve blogged about risk-based authentication.
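
The mechanism the abstract describes can be sketched in a few lines. The
features and the scoring rule below are illustrative inventions, not the
paper's implementation: risk is simply the fraction of login features never
seen before for this user, with a step-up to a second factor above a
threshold:

```python
# Minimal sketch of Risk-Based Authentication: remember the feature
# values seen at past logins; demand a second factor when the current
# login looks too unfamiliar.
from collections import defaultdict

class RBA:
    def __init__(self, threshold: float = 0.5):
        # user -> (feature name, value) -> times seen
        self.history = defaultdict(lambda: defaultdict(int))
        self.threshold = threshold

    def risk(self, user: str, features: dict) -> float:
        """Fraction of login features never seen before for this user."""
        seen = self.history[user]
        unfamiliar = sum(1 for f in features.items() if seen[f] == 0)
        return unfamiliar / len(features)

    def login(self, user: str, features: dict) -> str:
        decision = ("allow" if self.risk(user, features) <= self.threshold
                    else "step-up to 2FA")
        # Assume authentication ultimately succeeds; remember the features.
        for f in features.items():
            self.history[user][f] += 1
        return decision

rba = RBA()
usual = {"country": "DE", "browser": "Firefox", "device": "laptop"}
print(rba.login("alice", usual))                       # first login: step-up
print(rba.login("alice", usual))                       # familiar now: allow
print(rba.login("alice", {**usual, "country": "US"}))  # one odd feature: allow
print(rba.login("alice", {"country": "RU", "browser": "curl",
                          "device": "server"}))        # all odd: step-up
```

Real deployments score features probabilistically and weight them (IP
reputation more than browser version, say), but the shape is the same.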

** *** ***** ******* *********** *************
Swiss-Swedish Diplomatic Row Over Crypto AG

Previously I have written
<https://www.schneier.com/blog/archives/2020/02/crypto_ag_was_o.html> about
<https://www.schneier.com/blog/archives/2020/03/more_on_crypto_.html> the
Swedish-owned, Swiss-based cryptographic hardware company Crypto AG. It was
a CIA-owned Cold War operation for decades. Today it is called Crypto
International <https://www.crypto.ch/en>, still based in Switzerland but
owned by a Swedish company.

It’s back in the news:

Late last week, Swedish Foreign Minister Ann Linde said she had canceled a
meeting with her Swiss counterpart Ignazio Cassis slated for this month
after Switzerland placed an export ban on Crypto International
<https://www.crypto.ch/en>, a Swiss-based and Swedish-owned cybersecurity
firm.
The ban was imposed while Swiss authorities examine long-running and
explosive claims that a previous incarnation of Crypto International,
Crypto AG, was little more than a front for U.S. intelligence-gathering
during the Cold War.

Linde said the Swiss ban was stopping “goods” -- which experts suggest
could include cybersecurity upgrades or other IT support needed by Swedish
state agencies -- from reaching Sweden.

She told public broadcaster SVT
that the meeting with Cassis was “not appropriate right now until we have
fully understood the Swiss actions.”

EDITED TO ADD (10/13): Lots of information
<https://www.cryptomuseum.com/intel/cia/rubicon.htm> on Crypto AG.

** *** ***** ******* *********** *************
New Privacy Features in iOS 14

A good rundown.

** *** ***** ******* *********** *************
Hacking Apple for Profit

Five researchers hacked
Apple’s networks -- not its products -- and found fifty-five
vulnerabilities. So far, they have received $289K.

One of the worst of all the bugs they found would have allowed criminals to
create a worm that would automatically steal all the photos, videos, and
documents from someone’s iCloud account and then do the same to the
victim’s contacts.

Lots of details in this blog post <https://samcurry.net/hacking-apple/> by
one of the hackers.

** *** ***** ******* *********** *************
Google Responds to Warrants for "About" Searches

One of the things we learned from the Snowden documents is that the NSA
conducts “about” searches. That is, searches based on activities and not
identifiers. A normal search would be on a name, or IP address, or phone
number. An about search would be something like “show me anyone who has used
this particular name in a communication,” or “show me anyone who was at
this particular location within this time frame.” These searches are legal
when conducted for the purpose of foreign surveillance, but the worry about
using them domestically is that they are unconstitutionally broad. After
all, the only way to know who said a particular name is to know what
everyone said, and the only way to know who was at a particular location is
to know where everyone was. The very nature of these searches requires mass

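The distinction is easy to see in query form. As a purely illustrative sketch -- toy data and a hypothetical log format, reflecting nothing about Google’s or the NSA’s actual systems -- an identifier search starts from a known *who*, while an “about” search must examine what *everyone* did:

```python
# Toy search log: (user, query) pairs. Purely illustrative.
log = [
    ("alice", "weather boston"),
    ("bob",   "123 main st"),
    ("carol", "123 main st directions"),
    ("bob",   "pizza near me"),
]

def identifier_search(log, user):
    """Normal search: start from a known identifier, return that user's activity."""
    return [query for u, query in log if u == user]

def about_search(log, term):
    """'About' search: to learn WHO used a term, every record must be examined."""
    return sorted({u for u, query in log if term in query})

print(identifier_search(log, "bob"))     # ['123 main st', 'pizza near me']
print(about_search(log, "123 main st"))  # ['bob', 'carol']
```

Both functions scan the list in this toy version, but conceptually the identifier search could be answered from an index keyed by user, while the “about” search inherently requires access to everyone’s records -- which is exactly the constitutional worry described above.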
The FBI does not conduct mass surveillance. But many US corporations do, as
a normal part of their business model. And the FBI uses that surveillance
infrastructure to conduct its own about searches. Here’s an arson case
where the FBI asked Google
<https://www.theregister.com/2020/10/09/google_search_arrest/> who searched
for a particular street address:

Homeland Security special agent Sylvette Reynoso testified that her team
began by asking Google to produce a list of public IP addresses used to
google the home of the victim in the run-up to the arson. The Chocolate
Factory [Google] complied with the warrant, and gave the investigators the
list. As Reynoso put it:

On June 15, 2020, the Honorable Ramon E. Reyes, Jr., United States
Magistrate Judge for the Eastern District of New York, authorized a search
warrant to Google for users who had searched the address of the Residence
close in time to the arson.

The records indicated two IPv6 addresses had been used to search for the
address three times: one the day before the SUV was set on fire, and the
other two about an hour before the attack. The IPv6 addresses were traced
to Verizon Wireless, which told the investigators that the addresses were
in use by an account belonging to Williams.

Google’s response is that this is rare:

While word of these sort of requests for the identities of people making
specific searches will raise the eyebrows of privacy-conscious users,
Google told *The Register* the warrants are a very rare occurrence, and its
team fights overly broad or vague requests.

“We vigorously protect the privacy of our users while supporting the
important work of law enforcement,” Google’s director of law enforcement
and information security Richard Salgado told us. “We require a warrant and
push to narrow the scope of these particular demands when overly broad,
including by objecting in court when appropriate.

“These data demands represent less than one per cent of total warrants and
a small fraction of the overall legal demands for user data that we
currently receive.”

Here’s another example
of what appears to be “about” data leading to a false arrest.

According to the lawsuit, police investigating the murder knew months
before they arrested Molina that the location data obtained from Google
often showed him in two places at once, and that he was not the only person
who drove the Honda registered under his name.

Avondale police knew almost two months before they arrested Molina that
another man -- his stepfather -- sometimes drove Molina’s white Honda. On October
25, 2018, police obtained records showing that Molina’s Honda had been
impounded earlier that year after Molina’s stepfather was caught driving
the car without a license.

Data obtained by Avondale police from Google did show that a device logged
into Molina’s Google account was in the area at the time of Knight’s
murder. Yet on a different date, the location data from Google also showed
that Molina was at a retirement community in Scottsdale (where his mother
worked) while debit card records showed that Molina had made a purchase at
a Walmart across town at the exact same time.

Molina’s attorneys argue that this and other instances like it should have
made it clear to Avondale police that Google’s account-location data is not
always reliable in determining the actual location of a person.

“About” searches might be rare, but that doesn’t make them a good idea. We
have knowingly and willingly built the architecture of a police state, just
so companies can show us ads. (And it is increasingly apparent
<https://us.macmillan.com/books/9780374538651> that the
advertising-supported Internet is heading for a crash.)

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing
summaries, analyses, insights, and commentaries on security technology. To
subscribe, or to read back issues, see Crypto-Gram's web page

You can also read these articles on my blog, Schneier on Security

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues
and friends who will find it valuable. Permission is also granted to
reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called
a security guru by the Economist. He is the author of over one dozen books
-- including his latest, We Have Root <https://www.schneier.com/books/root/>
-- as well as hundreds of articles, essays, and academic papers. His
newsletter and blog are read by over 250,000 people. Schneier is a fellow
at the Berkman Klein Center for Internet & Society at Harvard University; a
Lecturer in Public Policy at the Harvard Kennedy School; a board member of
the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an
Advisory Board Member of the Electronic Privacy Information Center and
VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

Copyright © 2020 by Bruce Schneier.

** *** ***** ******* *********** *************

Mailing list hosting graciously provided by MailChimp
<https://mailchimp.com/>. Sent without web bugs or link tracking.

This email was sent to: sondheim at panix.com
You are receiving this email because you subscribed to the Crypto-Gram
newsletter.
Bruce Schneier · Harvard Kennedy School · 1 Brattle Square · Cambridge, MA
02138 · USA


directory http://www.alansondheim.org tel 718-813-3285
email sondheim at panix.com, sondheim at gmail.com

More information about the NetBehaviour mailing list