Sitting Down with Rob Jenkins
Rob had some very insightful, innovative answers to my questions, and I'm excited to share them with the growing IHF readership. Going forward, I'm hoping to have other thought leaders and readers contribute content and commentary to this blog, as I'd like to make it more of a forum for discussions of biometrics, facial recognition, and other technologies rather than a one-sided conversation. Feel free to comment on any of the questions or responses, and I will be sure to address them!
Also, check out Rob's departmental Web site for selected publications on gaze perception and other facial identification topics. Very interesting stuff.
In response to Manchester Airport lowering its matching thresholds, The Telegraph quoted you as saying that lowering the passport match level to 30 percent would make the system almost worthless. Another perspective is that the previous levels were causing horrendous queues and customer dissatisfaction. Is there a middle ground here?
There is certainly a middle ground in the sense that we can choose where to strike a balance between rejecting genuine matches and accepting false matches. But reducing either type of error generally increases the other, so it’s a trade-off. There is no ‘sweet spot’ where both types of error are reined in.
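To see why the two error rates pull against each other, here is a minimal Python sketch. The score distributions below are made up for illustration (they are not Manchester Airport's figures); the point is only that as long as genuine and impostor scores overlap, any threshold that cuts false rejections lets more false accepts through.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical similarity scores on a 0-100 scale: genuine pairs
    # (two images of the same person) score higher on average than
    # impostor pairs, but the two distributions overlap.
    genuine = rng.normal(70, 15, 10_000)
    impostor = rng.normal(35, 15, 10_000)

    for threshold in (60, 45, 30):
        frr = np.mean(genuine < threshold)    # genuine travellers rejected
        far = np.mean(impostor >= threshold)  # impostors waved through
        print(f"threshold {threshold}: FRR {frr:.1%}, FAR {far:.1%}")

Dropping the threshold from 60 to 30 shortens the queues by slashing false rejections, but the false accept rate climbs in step; no threshold brings both to zero while the distributions overlap.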
Despite the advanced nature of this technology, do you believe that there should still be a human element involved in security checks? If so, do you believe we will ever reach a point where this will no longer be necessary?
The main problem with referring the difficult cases to humans is that humans cannot do the task reliably either, even when we’re trained and experienced. Humans are fantastic at matching familiar faces, but our performance with unfamiliar faces is very poor. If we can somehow incorporate the benefits of familiarity into the technology, then it could be transformed.
Facial recognition technologies are popping up all over -- club entrances, bathroom faucets, online photo services, cameras used in lieu of passwords to access computers -- have they hit the tipping point? Is it only a matter of time before we use the technology to unlock our front doors and open our car trunks? What trajectory do you see it taking: staying in security-based deployments, infiltrating everyday life, or striking a balance between the two?
To some extent I think a tipping point is being ushered in, mostly by people who have something to sell. And it is an idea that some sectors are keen to buy into. So in that sense there is a lot of goodwill willing the technology to work. I don’t find the gadget market especially troubling, provided that errors are of relatively little consequence. The real danger is in rushing to large-scale security deployments. For applications such as passport control or forensic face recognition the stakes can be much higher, and we know that the available technology is not yet up to the task.
In the same vein, has facial recognition reached a point where accuracy and reliability now line up with the media's expectations?
In my experience, identification errors tend not to go down well with the public. I often ask audiences how often they would be prepared to be the subject of a misidentification. The answers are on the order of once a decade, even when the imagined consequences are minor. That’s a tall order, given the number of identity checks that some proposals entail. It comes as something of a shock when these demands are compared against current capability. As far as media expectations are concerned, I think there has been a change in tone. Traditionally, the emphasis has been on the implications of face recognition for privacy, with the unspoken assumption that the technology is reliable. These days there is more of an awareness that the technology is simply being phased in, whether it works or not. That changes the focus of the debate.
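A quick back-of-envelope calculation (the figures here are illustrative assumptions, not Rob's) shows how demanding a once-a-decade tolerance really is. Suppose a commuter passes through just two automated identity checks a day:

    checks_per_day = 2          # assumed: e.g. a transit gate plus a building entry
    checks_per_decade = checks_per_day * 365 * 10   # 7,300 checks

    # Error rate compatible with roughly one misidentification per decade:
    max_error_rate = 1 / checks_per_decade
    print(f"{max_error_rate:.4%} per check")  # about 0.0137%

That works out to a tolerable error rate of roughly one in 7,300 per check, far tighter than the match levels under discussion at Manchester Airport.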
The "Big Brother" argument -- that citizens are losing their individual privacy rights due to increased public security efforts -- is always present in a discussion about surveillance. Is there a point at which facial recognition and biometric technology infringe on personal freedoms and the right to privacy? Is blurring faces enough? Are there places where surveillance should not be allowed?
I don’t think facial recognition and biometric technology necessarily infringe on privacy. It is certainly possible to imagine applications where privacy concerns don’t arise. However, for the security and surveillance applications that have been at the forefront of public discussion, the tension with privacy is fundamental. The whole purpose of identifying someone is to connect them with some other information, and the nature of that information is a major issue. We can think of face recognition as a key to identity. But focusing on the key tends to distract us from other questions, like ‘what’s behind the lock?’ As more and more information is stored behind the lock, the reliability of the key becomes increasingly important. As does the question of who has access to the key.
The practice of blurring or pixellating faces to protect identity (as in Google Streetview) is often poorly informed. Although such manipulations can make it more difficult for observers to identify people, this is only the case when the observer is unfamiliar with the faces concerned. When the observer is familiar with the face, blurring or pixellating the image does surprisingly little to impede identification.
People have very different ideas about where surveillance should be allowed, and which places should be out of bounds. I don’t really foresee any wide agreement on the extent of coverage that is desirable or acceptable. The general trend is for rapid expansion, especially in the US and the UK, but my impression is that this trend is not driven by public demand.
The UK has over 4 million cameras -- that's one for every 14 people in the country, and 200,000 in London alone. Chicago is working to improve its 'Virtual Shield' and include the entire metropolitan area in its surveillance grid to cut down on crime. Yet criminals still often get away with murder -- literally. Are expectations set too high? Are surveillance grids more of a scare tactic meant to deter crime than a proactive tool for catching criminals in the act?
It has been known for some time that the unprecedented CCTV coverage in the UK has had little or no effect on crime rates. A recent Home Office report revealed that only 3% of crimes were solved using CCTV footage, and suggested that simple improvements to street lighting would be more effective. Part of the problem is that it is unrealistic for police to monitor CCTV footage on the scale at which it is produced. But more importantly, little thought has gone into the use of CCTV evidence in court. It has only recently become clear how poor humans are at matching unfamiliar faces, even when the images are of far higher quality than CCTV can provide. We’ve already looked at machine performance in this context. Establishing a match that will stand up in court is very difficult indeed.
The deterrent argument is interesting because the figures imply little or no deterrent value in CCTV. The standard explanation for this is that people assume the cameras are not working, which is a reasonable inference to make if they are not reducing crime. However, I wonder if there is also a paradoxical effect of increasing coverage. After all, the more cameras there are, the less likely it is that any particular camera is being monitored.
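That paradox is easy to put numbers on. Assuming, purely for illustration, a fixed pool of operators, each able to watch one feed at a time (the staffing figure is an assumption, not UK data), the chance that any given camera is being monitored falls in direct proportion to the size of the camera estate:

    # Illustrative figures only; staffing levels are assumptions, not UK data.
    operators = 100

    for cameras in (10_000, 100_000, 4_000_000):
        p_watched = operators / cameras
        print(f"{cameras:>9,} cameras: {p_watched:.4%} chance a given camera is watched")

At 4 million cameras and 100 operators, any particular camera has a 0.0025% chance of being watched at a given moment, which rather undercuts the deterrent message.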
Labels: Airports, Big Brother, Biometrics, Chicago, Face.com, Facial Recognition, Facial Scanners, London, Operation Virtual Shield, Privacy, Rob Jenkins, Surveillance, United Kingdom