Face Recognition Tech Goes On Trial
Nimesh Patel, aggrieved user of Facebook and Illinois resident, isn’t naive: He well understands that the social networking company collects information about him. But Facebook went too far for his liking when it collected certain intimate details about his physiognomy, such as how many millimeters of skin lie between his eyebrows, how far the corners of his mouth extend across his cheeks, and dozens of other aspects of his facial geometry that enable the company’s face recognition software to identify him.
Patel is a named plaintiff in a class-action lawsuit against Facebook alleging that the company’s use of face recognition technology violates an Illinois law passed in 2008. The Biometric Information Privacy Act (BIPA) sets limits on how companies can store and use people’s biometric identifiers, which the law defines as fingerprints, voiceprints, retina or iris scans, and scans of hand or face geometry.
The case is scheduled for trial this October, and similar Illinois-based lawsuits are proceeding against Google and Snapchat. In the upcoming year, the courts will host a debate over who can keep our faces on file.
Civil liberties groups say that debate is long overdue. The Illinois law is a rare outlier in the United States, where face recognition is increasingly being integrated into surveillance systems and law enforcement databases. The technology has rapidly improved in recent years, says Jennifer Lynch, an attorney with the Electronic Frontier Foundation, and regulations haven’t kept pace. “We could soon have security cameras in stores that identify people as they shop,” she says.
The case against Facebook hinges on a handy photo-tagging feature introduced in 2010: When a user uploads a photo, Facebook’s system automatically picks out any faces in the shot, tries to match those faces to people it’s seen in photos before, and offers up the names of any friends it has identified. According to the lawsuit, this “tag suggestion” system proves that Facebook collects and stores “face templates” for its American users. (The company turned off this feature in Europe in 2012 over privacy concerns.)
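The mechanics of that tag-suggestion loop are easy to sketch in outline. The snippet below is a hypothetical illustration only, not Facebook’s code: it assumes each friend already has a stored numerical “face template” and simply suggests whichever friend’s template lies closest to the face found in a newly uploaded photo.

```python
import numpy as np

# Hypothetical tag-suggestion sketch (not Facebook's actual system).
# Assume each known friend has a stored face template: a fixed-length
# vector of facial measurements computed from earlier photos.
known_templates = {
    "Alice": np.array([0.12, 0.87, 0.33, 0.54]),
    "Bob":   np.array([0.91, 0.15, 0.68, 0.22]),
}

def suggest_tag(new_face_template, threshold=0.5):
    """Return the closest stored friend, or None if no template is close enough."""
    best_name, best_dist = None, float("inf")
    for name, template in known_templates.items():
        dist = np.linalg.norm(new_face_template - template)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

print(suggest_tag(np.array([0.10, 0.85, 0.30, 0.50])))  # prints "Alice"
```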
The Illinois law predates Facebook’s introduction of the tag-suggestion feature and doesn’t mention social networks. Instead, BIPA cites the potential use of biometric IDs in financial transactions, and notes that these identifiers differ significantly from PIN codes and passwords: if customers’ biometric IDs are stolen by hackers, they can’t be issued new fingerprints or faces. But the class-action lawyers who have recently seized on the law aren’t going after banks; they’re targeting tech companies. Yet another lawsuit, settled in April 2016 for an undisclosed sum, took aim at the photo storage site Shutterfly.
Under BIPA, private companies must develop written policies stating how long they will retain people’s biometric information and when they will permanently destroy that data. “In a way, this is a modest law,” says Claire Gartland, an attorney who works on consumer privacy issues at EPIC, the Electronic Privacy Information Center. “It just requires a disclaimer to the consumer.”
By maintaining a database of Illinois users’ face templates without a written policy in place, the suit says, Facebook has violated the law. A Facebook spokesperson declined to answer questions about the lawsuit but noted that users can easily turn off the tag-suggestion feature for their accounts.
The legal wrangling has already begun. In late 2015 the company filed a motion to dismiss based on its interpretation of BIPA’s list of biometric identifiers, which includes face scans and face geometries yet explicitly excludes photographs and physical descriptions. Facebook argued that the law refers only to physical face scanners that create biometric records based on flesh-and-blood faces. But the court called Facebook’s argument “unpersuasive,” saying that the law was intended to address all emerging biometric technologies, and allowed the suit to move forward.
If Facebook loses the case, the company could be forced to pay damages to millions of Illinois users and change its policies in that state or, more practically, throughout the United States.
In the courtroom, it’s quite possible that the technical aspects of Facebook’s face recognition technology will come into play. The courts may need to know whether the company uses the conventional approach to face-matching software, says biometrics expert Anil Jain, a professor of computer science and engineering at Michigan State University. Such systems build and store face templates based on thousands of measurements: “They extract landmark points by sampling across the contours of the face, the eyebrows, the nose, the points along the lips, the two ends of the mouth, and so forth,” he says.
But Jain notes that Facebook researchers pioneered a new approach to face recognition that relies on machine learning, introducing their DeepFace system in a 2014 paper. In the report, the researchers describe training their system using a data set of 4.4 million labeled faces drawn from Facebook photographs. The system’s deep neural network examined the faces based on millions of parameters, and derived its face-matching rules based on whatever mysterious lessons it learned. “It’s more like a black box,” Jain says.
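A learned face matcher of that kind can be pictured, very loosely, as a neural network that maps an aligned face crop to a vector of numbers and then compares vectors. The toy PyTorch model below is an assumption-laden illustration of the idea, not DeepFace itself; the architecture, image size, and match threshold are stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy embedding-based face matcher (illustrative only, not DeepFace).
class FaceEmbedder(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embedding_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)  # unit-length face embedding

model = FaceEmbedder().eval()
face_a = torch.rand(1, 3, 152, 152)  # stand-ins for two aligned face crops
face_b = torch.rand(1, 3, 152, 152)
with torch.no_grad():
    similarity = (model(face_a) * model(face_b)).sum()  # cosine similarity
print("same person" if similarity > 0.8 else "different people")  # arbitrary threshold
```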
Facebook won’t say whether it now uses DeepFace, or something like it, for its standard tag-suggestion feature. If the company does employ this advanced method, however, its current technology might not violate the letter of the law. “The question is what they store in the database,” explains Jain. As the DeepFace program analyzes raw photographs, the system might simply hold on to the analytic rules it has learned, and might not bother to store face templates that count as biometric identifiers.
Therein lies the irony: If Facebook doesn’t save faces in its database, it may save face in court.