---
dg-publish: true
Alias: [""]
Tag: ["Tech", "CSSA", "FANGAbuse"]
Date: 2022-08-21
DocType: "WebClipping"
Hierarchy:
TimeStamp: 2022-08-21
Link: https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html
location:
CollapseMetaTable: Yes
---
Parent:: [[@News|News]]
Read:: [[2022-08-21]]
---
 
```button
name Save
type command
action Save current file
id Save
```
^button-ADadTookPhotosofHisNakedToddlerfortheDoctorNSave
 
# A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal.
![Mark with his son this month.](https://static01.nyt.com/images/2022/08/17/business/00Google-Photo-lede/merlin_211189338_dc79ba5b-75ab-45a5-9531-27efa7093714-articleLarge.jpg?quality=75&auto=webp&disable=upscale)
Credit...Aaron Wojack for The New York Times
Google has an automated tool to detect abusive images of children. But the system can get it wrong, and the consequences are serious.
Aug. 21, 2022
Mark noticed something amiss with his toddler. His son’s penis looked swollen and was hurting him. Mark, a stay-at-home dad in San Francisco, grabbed his Android smartphone and took photos to document the problem so he could track its progression.
It was a Friday night in February 2021. His wife called an advice nurse at their health care provider to schedule an emergency consultation for the next morning, by video because it was a Saturday and there was a pandemic going on. The nurse said to send photos so the doctor could review them in advance.
Mark’s wife grabbed her husband’s phone and texted a few high-quality close-ups of their son’s groin area to her iPhone so she could upload them to the health care provider’s messaging system. In one, Mark’s hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images.
With help from the photos, the doctor diagnosed the issue and prescribed antibiotics, which quickly cleared it up. But the episode left Mark with a much larger problem, one that would cost him more than a decade of contacts, emails and photos, and make him the target of a police investigation. Mark, who asked to be identified only by his first name for fear of potential reputational harm, had been caught in an algorithmic net designed to snare people exchanging child sexual abuse material.
Because technology companies routinely capture so much data, they have been pressured to act as sentinels, examining what passes through their servers to detect and prevent criminal behavior. Child advocates say the companies’ cooperation is essential to combat the rampant online spread of [sexual abuse imagery](https://www.nytimes.com/interactive/2019/09/28/us/child-sex-abuse.html). But it can entail peering into private archives, such as digital photo albums — an intrusion users may not expect — that has cast innocent behavior in a sinister light in at least two cases The Times has unearthed.
Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, called the cases canaries “in this particular coal mine.”
“There could be tens, hundreds, thousands more of these,” he said.
Given the toxic nature of the accusations, Mr. Callas speculated that most people wrongfully flagged would not publicize what had happened.
“I knew that these companies were watching and that privacy is not what we would hope it to be,” Mark said. “But I haven’t done anything wrong.”
The police agreed. Google did not.
## A Severe Violation
After setting up a Gmail account in the mid-aughts, Mark, who is in his 40s, came to rely heavily on Google. He synced appointments with his wife on Google Calendar. His Android smartphone camera backed up his photos and videos to the Google cloud. He even had a phone plan with Google Fi.
Two days after taking the photos of his son, Mark’s phone made a blooping notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a [list of possible reasons](https://support.google.com/accounts/answer/40695?hl=en), including “child sexual abuse & exploitation.”
Mark was confused at first but then remembered his son’s infection. “Oh, God, Google probably thinks that was child porn,” he thought.
In an unusual twist, Mark had worked as a software engineer on a large technology company’s automated tool for taking down video content flagged by users as problematic. He knew such systems often have a human in the loop to ensure that computers don’t make a mistake, and he assumed his case would be cleared up as soon as it reached that person.
![Mark, a software engineer who is currently a stay-at-home dad, assumed he would get his account back once he explained what happened. He didn’t.](https://static01.nyt.com/images/2022/08/19/business/00Google-Photo-02/00Google-Photo-02-articleLarge.jpg?quality=75&auto=webp&disable=upscale)
Credit...Aaron Wojack for The New York Times
He filled out a form requesting a review of Google’s decision, explaining his son’s infection. At the same time, he discovered the domino effect of Google’s rejection. Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life, his Google Fi account shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.
“The more eggs you have in one basket, the more likely the basket is to break,” he said.
In a statement, Google said, “Child sexual abuse material is abhorrent and we’re committed to preventing the spread of it on our platforms.”
A few days after Mark filed the appeal, Google responded that it would not reinstate the account, with no further explanation.
Mark didn’t know it, but Google’s review team had also flagged a video he made and the San Francisco Police Department had already started to investigate him.
## How Google Flags Images
The day after Mark’s troubles started, the same scenario was playing out in Texas. A toddler in Houston had an infection in his “intimal parts,” wrote his father in [an online post](https://googlemessingupmylife.quora.com/Google-incorrectly-judged-my-case-On-February-22nd-2021-Google-disabled-my-account-saying-I-had-seriously-violated-th) that I stumbled upon while reporting out Mark’s story. At the pediatrician’s request, Cassio, who also asked to be identified only by his first name, used an Android to take photos, which were backed up automatically to Google Photos. He then sent them to his wife via Google’s chat service.
Cassio was in the middle of buying a house, and signing countless digital documents, when his Gmail account was disabled. He asked his mortgage broker to switch his email address, which made the broker suspicious until Cassio’s real estate agent vouched for him.
“It was a headache,” Cassio said.
Images of children being exploited or sexually abused are [flagged by technology giants](https://www.nytimes.com/interactive/2019/09/28/us/child-sex-abuse.html) millions of times each year. In 2021, [Google alone](https://transparencyreport.google.com/child-sexual-abuse-material/reporting?lu=total_cybertipline_reports&total_cybertipline_reports=product:GOOGLE;period:2021H1) filed over 600,000 reports of child abuse material and disabled the accounts of over 270,000 users as a result. Mark’s and Cassio’s experiences were drops in a big bucket.
The tech industry’s first tool to seriously disrupt the vast online exchange of so-called child pornography was PhotoDNA, a database of known images of abuse, converted into unique digital codes, or hashes; it could be used to quickly comb through large numbers of images to detect a match even if a photo had been altered in small ways. After Microsoft released PhotoDNA in 2009, Facebook and other tech companies used it to root out users circulating illegal and harmful imagery.
“It’s a terrific tool,” the president of the National Center for Missing and Exploited Children said [at the time](https://archive.nytimes.com/bits.blogs.nytimes.com/2009/12/16/microsoft-tackles-the-child-pornography-problem/).
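PhotoDNA’s exact hashing scheme is proprietary, but the general mechanism the article describes (reducing each image to a compact fingerprint and comparing fingerprints rather than pixels) can be sketched. The snippet below is a minimal Python illustration using a simple 64-bit average hash and a Hamming-distance threshold; Pillow, the `KNOWN_HASHES` set and the threshold value are assumptions made for this example, not details of Microsoft’s system.

```python
# Minimal sketch of hash-based image matching (not PhotoDNA itself, whose
# algorithm is proprietary). An "average hash" survives small alterations
# such as resizing or recompression, so near-duplicates can still match.
from PIL import Image  # assumes Pillow is installed


def average_hash(path: str) -> int:
    """Reduce an image to a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((8, 8))  # grayscale, 8x8
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:  # one bit per pixel: above or below the average
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical database of hashes of known abusive imagery.
KNOWN_HASHES: set[int] = set()


def matches_known_image(path: str, threshold: int = 5) -> bool:
    h = average_hash(path)
    return any(hamming(h, known) <= threshold for known in KNOWN_HASHES)
```

A real system distributes only the hashes of known material, which is why, as described later in the article, new imagery has to be added to the shared database before other companies can detect it.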
A bigger breakthrough came along almost a decade later, in 2018, when Google [developed](https://www.blog.google/around-the-globe/google-europe/using-ai-help-organizations-detect-and-report-child-sexual-abuse-material-online/) an artificially intelligent tool that could recognize never-before-seen exploitative images of children. That meant finding not just known images of abused children but images of unknown victims who could potentially be rescued by the authorities. Google made [its technology](https://protectingchildren.google/#tools-to-fight-csam) available to other companies, including [Facebook](https://about.fb.com/news/2021/02/preventing-child-exploitation-on-our-apps/).
When Mark’s and Cassio’s photos were automatically uploaded from their phones to Google’s servers, this technology flagged them. Jon Callas of the E.F.F. called the scanning intrusive, saying a family photo album on someone’s personal device should be a “private sphere.” (A Google spokeswoman said the company scans only when an “affirmative action” is taken by a user; that includes when the user’s phone backs up photos to the company’s cloud.)
“This is precisely the nightmare that we are all concerned about,” Mr. Callas said. “They’re going to scan my family album, and then I’m going to get into trouble.”
A human content moderator for Google would have reviewed the photos after they were flagged by the artificial intelligence to confirm they met the federal definition of child sexual abuse material. When Google makes such a discovery, it locks the user’s account, searches for other exploitative material and, as required by [federal law](https://uscode.house.gov/view.xhtml?req=granuleid:USC-prelim-title18-section2258A&num=0&edition=prelim), makes a report to the CyberTipline at the National Center for Missing and Exploited Children.
The nonprofit organization has become the clearinghouse for abuse material; it received 29.3 million reports last year, or about 80,000 reports a day. Fallon McNulty, who manages the CyberTipline, said most of these are previously reported images, which remain in steady circulation on the internet. So her staff of 40 analysts focuses on potential new victims, so they can prioritize those cases for law enforcement.
“Generally, if NCMEC staff review a CyberTipline report and it includes exploitative material that hasn’t been seen before, they will escalate,” Ms. McNulty said. “That may be a child who hasn’t yet been identified or safeguarded and isn’t out of harm’s way.”
Ms. McNulty said Google’s astonishing ability to spot these images so her organization could report them to police for further investigation was “an example of the system working as it should.”
CyberTipline staff members add any new abusive images to the hashed database that is shared with technology companies for scanning purposes. When Mark’s wife learned this, she deleted the photos Mark had taken of their son from her iPhone, for fear Apple might flag her account. Apple [announced plans last year](https://www.nytimes.com/2021/08/05/technology/apple-iphones-privacy.html) to scan the iCloud for known sexually abusive depictions of children, but the rollout was [delayed](https://www.theverge.com/2021/12/15/22837631/apple-csam-detection-child-safety-feature-webpage-removal-delay) indefinitely after resistance from privacy groups.
In 2021, the CyberTipline [reported](https://www.missingkids.org/gethelpnow/cybertipline/cybertiplinedata) that it had alerted authorities to “over 4,260 potential new child victims.” The sons of Mark and Cassio were counted among them.
## No Crime Occurred
Credit...Aaron Wojack for The New York Times
In December 2021, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark’s Google account: his internet searches, his location history, his messages and any document, photo and video he’d stored with the company.
The search, related to “child exploitation videos,” had taken place in February, within a week of his taking the photos of his son.
Mark called the investigator, Nicholas Hillard, who said the case was closed. Mr. Hillard had tried to get in touch with Mark but his phone number and email address hadn’t worked.
“I determined that the incident did not meet the elements of a crime and that no crime occurred,” Mr. Hillard wrote in his report. The police had access to all the information Google had on Mark and decided it did not constitute child abuse or exploitation.
Mark asked if Mr. Hillard could tell Google that he was innocent so he could get his account back.
“You have to talk to Google,” Mr. Hillard said, according to Mark. “There’s nothing I can do.”
Mark appealed his case to Google again, providing the police report, but to no avail. After getting a notice two months ago that his account was being permanently deleted, Mark spoke with a lawyer about suing Google and how much it might cost.
“I decided it was probably not worth $7,000,” he said.
Kate Klonick, a law professor at St. John’s University who has written about [online content moderation](https://harvardlawreview.org/wp-content/uploads/2018/04/1598-1670_Online.pdf), said it can be challenging to “account for things that are invisible in a photo, like the behavior of the people sharing an image or the intentions of the person taking it.” False positives, where people are erroneously flagged, are [inevitable](https://www.linkedin.com/help/linkedin/answer/136405) given the billions of images being scanned. While most people would probably consider that trade-off worthwhile, given the benefit of identifying abused children, Ms. Klonick said companies need a “robust process” for clearing and reinstating innocent people who are mistakenly flagged.
“This would be problematic if it were just a case of content moderation and censorship,” Ms. Klonick said. “But this is doubly dangerous in that it also results in someone being reported to law enforcement.”
It could have been worse, she said, with a parent potentially losing custody of a child. “You could imagine how this might escalate,” Ms. Klonick said.
Cassio was also investigated by the police. A detective from the Houston Police Department called in the fall of 2021, asking him to come into the station.
After Cassio showed the detective his communications with the pediatrician, he was quickly cleared. But he, too, was unable to get his decade-old Google account back, despite being a paying user of Google’s web services. He now uses a Hotmail address for email, which people mock him for, and makes multiple backups of his data.
## You Don’t Necessarily Know It When You See It
Credit...Aaron Wojack for The New York Times
Not all photos of naked children are pornographic, exploitative or abusive. [Carissa Byrne Hessick](https://law.unc.edu/people/carissa-byrne-hessick/), a law professor at the University of North Carolina who writes about child pornography crimes, said that legally defining what constitutes sexually abusive imagery can be complicated.
But Ms. Hessick said she agreed with the police that medical images did not qualify. “There’s no abuse of the child,” she said. “It’s taken for nonsexual reasons.”
In machine learning, a computer program is trained by being fed “right” and “wrong” information until it can distinguish between the two. To avoid flagging photos of babies in the bath or children running unclothed through sprinklers, Google’s A.I. for recognizing abuse was trained both with images of potentially illegal material found by Google in user accounts in the past and with images that were not indicative of abuse, to give it a more precise understanding of what to flag.
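Google has not published its model, but the supervised-learning setup this paragraph describes, training on labeled “right” and “wrong” examples, can be sketched. The following Python example is illustrative only: the pixel-flattening feature extractor, the logistic-regression model and the randomly generated placeholder images are assumptions made for demonstration, not Google’s architecture or data.

```python
# Illustrative sketch of training a binary image classifier from labeled
# examples (not Google's actual model, which is undisclosed).
import numpy as np
from sklearn.linear_model import LogisticRegression


def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: a production system would use a deep
    neural network; here we simply flatten and normalize the pixels."""
    return image.astype(np.float32).ravel() / 255.0


# Hypothetical labeled training data: 1 = flagged material, 0 = benign
# (bath photos, sprinklers, etc.), both as 64x64 grayscale placeholder arrays.
abuse_images = [np.random.randint(0, 256, (64, 64)) for _ in range(100)]
benign_images = [np.random.randint(0, 256, (64, 64)) for _ in range(100)]

X = np.array([extract_features(img) for img in abuse_images + benign_images])
y = np.array([1] * len(abuse_images) + [0] * len(benign_images))

model = LogisticRegression(max_iter=1000).fit(X, y)

# At scan time, only high-confidence predictions would be escalated
# to a human reviewer.
score = model.predict_proba(extract_features(abuse_images[0]).reshape(1, -1))[0, 1]
print(f"probability image is abusive: {score:.2f}")
```

The point of including explicitly benign examples, as the article notes, is to push the decision boundary away from innocent photos of unclothed children; even so, borderline cases like Mark’s can still be flagged and passed to a human reviewer.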
I have seen the photos that Mark took of his son. The decision to flag them was understandable: They are explicit photos of a child’s genitalia. But the context matters: They were taken by a parent worried about a sick child.
“We do recognize that in an age of telemedicine and particularly Covid, it has been necessary for parents to take photos of their children in order to get a diagnosis,” said Claire Lilley, Googles head of child safety operations. The company has consulted pediatricians, she said, so that its human reviewers understand possible conditions that might appear in photographs taken for medical reasons.
Dr. Suzanne Haney, chair of the American Academy of Pediatrics’ Council on Child Abuse and Neglect, advised parents against taking photos of their children’s genitals, even when directed by a doctor.
“The last thing you want is for a child to get comfortable with someone photographing their genitalia,” Dr. Haney said. “If you absolutely have to, avoid uploading to the cloud and delete them immediately.”
She said most physicians were probably unaware of the risks in asking parents to take such photos.
“I applaud Google for what they’re doing,” Dr. Haney said of the company’s efforts to combat abuse. “We do have a horrible problem. Unfortunately, it got tied up with parents trying to do right by their kids.”
Cassio was told by a customer support representative earlier this year that sending the pictures to his wife using Google Hangouts violated the chat services [terms of service](https://support.google.com/hangouts/answer/9334169?hl=en). “Do not use Hangouts in any way that exploits children,” the terms read. “Google has a zero-tolerance policy against this content.”
As for Mark, Ms. Lilley, at Google, said that reviewers had not detected a rash or redness in the photos he took and that the subsequent review of his account turned up a video from six months earlier that Google also considered problematic, of a young child lying in bed with an unclothed woman.
Mark did not remember this video and no longer had access to it, but he said it sounded like a private moment he would have been inspired to capture, not realizing it would ever be viewed or judged by anyone else.
“I can imagine it. We woke up one morning. It was a beautiful day with my wife and son and I wanted to record the moment,” Mark said. “If only we slept with pajamas on, this all could have been avoided.”
A Google spokeswoman said the company stands by its decisions, even though law enforcement cleared the two men.
## Guilty by Default
Ms. Hessick, the law professor, said the cooperation the technology companies provide to law enforcement to address and root out child sexual abuse is “incredibly important,” but she thought it should allow for corrections.
“From Google’s perspective, it’s easier to just deny these people the use of their services,” she speculated. Otherwise, the company would have to resolve more difficult questions about “what’s appropriate behavior with kids and then what’s appropriate to photograph or not.”
Mark still has hope that he can get his information back. The San Francisco police have the contents of his Google account preserved on a thumb drive. Mark is now trying to get a copy. A police spokesman said the department is eager to help him.
Nico Grant contributed reporting. Susan Beachy contributed research.
 
 
---
`$= dv.el('center', 'Source: ' + dv.current().Link + ', ' + dv.current().Date.toLocaleString("fr-FR"))`