How closely is Google really reading your e-mail?


Internet service providers and e-mail providers, such as Google, are sometimes put in a difficult position. (Georges Gobet/AFP/Getty Images)
Most users know that Google routinely uses software to scan the contents of e-mails, including images, to feed its advertising and to identify malware. But many may not be aware that the company also scans users' accounts for illegal activity -- namely, matching images in e-mails against a known database of illegal and pornographic images of children.

That bit of Google policy came to light last week, when a Houston man was arrested on charges of having and promoting child pornography after Google told the National Center for Missing and Exploited Children that he had the images in his Gmail account. The tipoff, according to a report from Houston television channel KHOU, led to the man's arrest.
While it's hard to argue with the outcome of this particular case, the news did raise some alarm bells among researchers at the security firm Sophos, who questioned whether Google was stepping outside its place as a company and into the role of a pseudo law enforcement agency.
Chester Wisniewski, a senior security researcher at Sophos, said that Google's "proactive" decision to tip off law enforcement makes "some of us wonder if they're crossing the line."
Many security firms, including Sophos, occasionally come across child pornography among the files they scan for clients, Wisniewski said, and in those cases they report the images to the police. But the difference, he said, is that Sophos and other companies don't actively go looking for these images, as Google appears to have done with the software it uses to scan customers' e-mails for ad keywords and malicious software.
Google declined to comment on this specific case but pointed to a June 2013 column in a British paper, the Telegraph, that outlined the steps Google and other major tech firms such as Microsoft take to identify graphic images of children and report users who share those images.
In that column, Google chief legal officer David Drummond said that it is up to Google and other firms such as Microsoft, Yahoo and Apple to ensure that when "people try to share this disgusting content they are reported and prosecuted."
In many other cases, Google has taken a hard line against bowing to government requests for information without due process, such as a warrant or court order -- but not, Drummond said, in this case.
""oogle is in the business of making information widely available, and we’ve always supported freedom of expression," Drummond wrote in the column. "But there can be no free speech when it comes to images of child sexual abuse."
Drummond and Google are by no means alone in drawing that line. Laws that expand government power to access data or censor Internet traffic often focus on child pornography and images of child abuse, because such material is the most extreme example of the line companies must walk when deciding whether to share customer data with law enforcement without a warrant.
A recent Supreme Court case in Canada centered on a man found to be downloading child pornography and addressed whether Internet service providers should hand over identifying Internet protocol addresses to law enforcement without a warrant. The court ruled against providing the information, drawing praise from privacy advocates and criticism from law enforcement agencies, which called the decision a setback because of the additional time it will take them to obtain warrants.
The Texas case is triggering a similar debate in the United States over what role these companies -- companies with which we share our most private thoughts -- should play in law enforcement. And, Wisniewski said, it should also stand as yet another reminder that nothing we do online is truly private and that companies are well within their legal rights to use consumer information as they see fit.
"There's nothing wrong with what Google's doing, aside from what we as a society might be concerned about," Wisniewski said.
Update: After initially declining to comment, Google sent the following statement to The Washington Post:
“Sadly all Internet companies have to deal with child sexual abuse.  It’s why Google actively removes illegal imagery from our services -- including search and Gmail -- and immediately reports abuse to NCMEC.  This evidence is regularly used to convict criminals.  Each child sexual abuse image is given a unique digital fingerprint which enables our systems to identify those pictures, including in Gmail.  It is important to remember that we only use this technology to identify child sexual abuse imagery, not other email content that could be associated with criminal activity (for example using email to plot a burglary).”
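Google's statement describes a fingerprint-matching approach: known abuse images are reduced to compact signatures, and incoming attachments are compared against that signature database. Purely as an illustration, the Python sketch below shows the general shape of such a check using an exact SHA-256 hash. The folder name, function names and placeholder digest are hypothetical, and production systems (such as Microsoft's PhotoDNA, discussed in the Telegraph column) use perceptual hashes that still match after resizing or re-encoding, not plain cryptographic hashes.

import hashlib
from pathlib import Path

# Hypothetical fingerprint database: hex digests of known abuse images.
# In practice such lists are maintained with NCMEC; this entry is a placeholder.
KNOWN_FINGERPRINTS = {"0" * 64}

def fingerprint(image_bytes: bytes) -> str:
    # Exact-match fingerprint (SHA-256). Real systems use perceptual hashes
    # that survive resizing and re-encoding; this only catches identical files.
    return hashlib.sha256(image_bytes).hexdigest()

def attachment_matches(path: Path) -> bool:
    # True if the attachment's fingerprint is in the known-image database.
    return fingerprint(path.read_bytes()) in KNOWN_FINGERPRINTS

if __name__ == "__main__":
    # "attachments" stands in for files extracted from an incoming message.
    for attachment in Path("attachments").glob("*"):
        if attachment_matches(attachment):
            print(f"{attachment}: matches a known image and would be reported")

Even this toy version makes Google's caveat concrete: the check can only flag images whose fingerprints are already in the database, which is why the company says the technology identifies known child sexual abuse imagery rather than reading mail for other criminal activity.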
