Archive for the ‘Fourth Amendment’ Category

Augmented Reality Must Have Augmented Privacy – EFF

Imagine walking down the street, looking for a good cup of coffee. In the distance, a storefront glows green through your smart glasses, indicating a well-reviewed cafe with a sterling public health score. You follow the holographic arrows to the crosswalk, as your wearables silently signal the self-driving cars to be sure they stop for your right of way. In the crowd ahead you recognize someone, but can't quite place them. A query and response later, the name Cameron pops up above their head, along with the context needed to remember they were a classmate from university. You greet them, each of you glad to avoid the awkwardness of not recalling an acquaintance.

This is the stuff of science fiction, sometimes rendered as utopia, but often as a warning against dystopia: lurking in every gadget that can enhance your life is a danger to privacy and security. In either case, augmented reality is coming closer to being an everyday reality.

In 2013, Google Glass stirred a backlash, but the promise of augmented reality, bringing 3D models and computer interfaces into the physical world (while recording everything in the process), is re-emerging, and so is the public outcry over privacy and always-on recording. In the seven years since, companies have kept pushing for augmented reality glasses that display digital images and data within the wearer's view. The Chinese company Nreal, Facebook, and Apple are all experimenting with similar technology.

Digitizing the World in 3D

Several technologies, from augmented and virtual reality to autonomous vehicles, are converging to create a live map of different parts of our world: machine-readable, 1:1-scale models that are continuously updated in real time. Some implement such models through point clouds, datasets of points coming from a scanner that recreate the surfaces (not the interior) of objects or of a space. Each point has three coordinates that position it in space. To make sense of the millions (or billions) of points, machine-learning software can help recognize the objects within the point clouds, which together look like a digital replica of the world, or a map of your house and everything inside it.
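As a rough sketch (with invented coordinates, not any company's actual pipeline), a point cloud is simply a list of 3D coordinates, and recognition software works on clusters of those points:

```python
# A point cloud: each point is an (x, y, z) coordinate, here in meters.
# Hypothetical scan of a corner of a room.
point_cloud = [
    (0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0),   # floor points
    (1.0, 1.0, 0.7), (1.1, 1.0, 0.7), (1.0, 1.1, 0.7),   # table-top points
]

def bounding_box(points):
    """The axis-aligned box enclosing a set of points: one simple way
    software summarizes a cluster before trying to label it ("table",
    "chair", ...) with a trained model."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

lo, hi = bounding_box(point_cloud)
```

Real systems handle billions of such points and attach machine-learning classifiers on top; the principle, though, is just labeled geometry.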

The promise of creating a persistent 3D digital clone of the world, aligned with real-world coordinates, goes by many names: the world's "digital twin," a "parallel digital universe," the "Mirrorworld," "The Spatial Web," the "Magic Verse," or a "Metaverse." Whatever you call it, this new parallel digital world will introduce a new world of privacy concerns, even for those who choose never to wear an AR device. For instance, Facebook Live Maps will seek to create a shared virtual map. LiveMaps will rely on crowd-sourced maps collected by users' future AR devices with client-mapping functionality. Open AR, an interoperable AR Cloud, and Microsoft's Azure Digital Twins are seeking to model and create a digital representation of an environment.

Facebook's Project Aria continues that trend: it will aid Facebook in recording live 3D maps and developing AI models for Facebook's first generation of wearable augmented reality devices. Aria's uniqueness, in contrast to autonomous cars, is its egocentric collection of data about the environment: the recorded data will come from the wearer's perspective, a more intimate type of data. Project Aria is also a 3D live-mapping tool and software with an AI development tool, not a prototype of a product, nor an AR device, since it lacks a display. According to Facebook, Aria's research glasses, which are not for sale, will be worn only by trained Facebook staffers and contractors to collect data from the wearer's point of view. For example, if an AR wearer records a building and the building later burns down, the next time any AR wearer walks by, the device can detect the change and update the 3D map in real time.

A Portal to Augmented Privacy Threats

In terms of sensors, Aria will include, among others, a magnetometer, a barometer, a GPS chip, and two inertial measurement units (IMUs). Together, these sensors will track where the wearer is (location), where the wearer is moving (motion), and what the wearer is looking at (orientation), a much more precise way to pinpoint the wearer's position. While GPS often doesn't work inside a building, for example, sophisticated IMUs can let position tracking continue indoors when GPS signals are unavailable.
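To illustrate why IMUs matter indoors, the classic technique is dead reckoning: integrating acceleration into velocity and position from the last known GPS fix. This one-dimensional sketch uses invented numbers and omits the 3D sensor fusion and drift correction real systems require:

```python
# Dead reckoning: a simplified 1-D sketch of how an IMU extends tracking
# after GPS drops out. Real devices fuse accelerometer, gyroscope, and
# magnetometer readings with filtering; all values here are invented.
def dead_reckon(position, velocity, accel_samples, dt):
    """Integrate acceleration (m/s^2) twice over fixed time steps of
    dt seconds, starting from the last known position and velocity."""
    for a in accel_samples:
        velocity += a * dt          # acceleration -> velocity
        position += velocity * dt   # velocity -> position
    return position, velocity

# Last GPS fix: 10 m along a hallway, walking at a steady 1 m/s
# (zero acceleration) for four half-second IMU samples.
pos, vel = dead_reckon(10.0, 1.0, [0.0, 0.0, 0.0, 0.0], dt=0.5)
```

Because small acceleration errors accumulate with each integration step, real systems periodically re-anchor the estimate against GPS, Wi-Fi, or visual landmarks.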

A machine learning algorithm will build a model of the environment, based on all the input data collected by the hardware, to recognize specific objects and 3D-map your space and the things in it. It can estimate distances, for instance how far the wearer is from an object. It can also identify the wearer's context and activities: Are you reading a book? Your device might then offer you a reading recommendation.

The Bystander's Right to Private Life

Imagine a future where anyone you see wearing glasses could be recording your conversations with always-on microphones and cameras, updating the map of where you are in precise detail and in real time. In this dystopia, the possibility of being recorded looms over every walk in the park, every conversation in a bar, and indeed everything you do near other people.

During Aria's research phase, Facebook will be recording its own contractors' interactions with the world. It is taking certain precautions. It asks the owner's permission before recording in privately owned venues such as a bar or restaurant. It avoids sensitive areas, like restrooms and protests. It blurs people's faces and license plates. Yet there are still many other ways to identify individuals, from tattoos to a person's gait, and these should be obfuscated, too.

These blurring protections mirror those used by other public mapping tools like Google Street View, where they have proven reasonable, but far from infallible, in safeguarding bystanders' privacy. Google Street View also benefits from focusing on objects, which need only occasional recording. It's unclear whether these protections remain adequate for perpetual crowd-sourced recordings, which focus on human interactions. Once Facebook and other AR companies release their first generation of AR devices, it will likely take concerted efforts by civil society to keep obfuscation techniques like blurring in commercial products. We hope those products do not layer robust identification technologies, such as facial recognition, on top of the existing AR interface.

The AR Panopticon

If AR glasses with always-on microphones and cameras or powerful 3D-mapping sensors become widely adopted, the scope and scale of the problem change as well. The company behind any AR system could then have a live audio/visual window into all corners of the world, with the ability to locate and identify anyone at any time, especially if facial or other recognition technologies are included in the package. The result? A global panopticon society of constant surveillance in public or semi-public spaces.

In modern times, the panopticon has become a metaphor for a dystopian surveillance state, where the government has cameras observing your every action. Worse, you never know if you are a target, as law enforcement looks to new technology to deepen their already rich ability to surveil our lives.

Legal Protection Against the Panopticon

To fight back against this dystopia, and especially government access to this panopticon, our first line of defense in the United States is the Constitution. Around the world, we all enjoy the protection of international human rights law. Last week, we explained how police need to come back with a warrant before conducting a search of virtual representations of your private spaces. While AR measuring and modeling in public and semi-public spaces is different from private spaces, key Constitutional and international human rights principles still provide significant legal protection against police access.

In Carpenter v. United States, the U.S. Supreme Court recognized the challenge of understanding the privacy risks of new technologies, warning courts to tread carefully to ensure that "we do not embarrass the future."

To not embarrass the future, we must recognize that throughout history people have enjoyed effective anonymity and privacy when conducting activities in public or semi-public spaces. As the United Nations' Free Speech Rapporteur made clear, anonymity is "a common human desire to protect one's identity from the crowd." Likewise, the Council of Europe has recognized that while any person moving in public areas may expect a lesser degree of privacy, they do not and should not expect to be deprived of their rights and freedoms, including those related to their own private sphere. Similarly, the European Court of Human Rights has recognized that a "zone of interaction of a person with others," even in a public context, may fall within the scope of private life, and that even in public places, the systematic or permanent recording and subsequent processing of images could raise questions affecting the private life of individuals. Over forty years ago, in Katz v. United States, the U.S. Supreme Court also recognized that "what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected."

This makes sense: the natural limits of human memory make it difficult to remember details about people we encounter in the street, which effectively offers us some level of privacy and anonymity in public spaces. Electronic devices, however, can remember perfectly, and collect these memories in a centralized database to be potentially used by corporate and state actors. Already this sense of privacy has been eroded by public camera networks, ubiquitous cellphone cameras, license plate readers, and RFID trackers, requiring legal protections. Indeed, the European Court of Human Rights requires "clear, detailed rules," especially as "the technology available for use [is] continually becoming more sophisticated."

If smartglasses become as common as smartphones, we risk losing even more of the privacy of crowds. Far more thorough records of our sensitive public actions, including going to a political rally or protest, or even to a church or a doctor's office, can go down on our permanent records.

This technological problem was brought into the modern era in United States v. Jones, where the Supreme Court held that GPS tracking of a vehicle was a search, subject to the protection of the Fourth Amendment. Jones was a convoluted decision, with three separate opinions supporting this result. But within the three were five Justices, a majority, who ruled that prolonged GPS tracking violated Jones' reasonable expectation of privacy, despite Jones driving in public, where a police officer could have followed him in a car. Justice Alito explained the difference in his concurring opinion (joined by Justices Ginsburg, Breyer, and Kagan):

In the pre-computer age, the greatest protections of privacy were neither constitutional nor statutory, but practical. Traditional surveillance for any extended period of time was difficult and costly and therefore rarely undertaken. Only an investigation of unusual importance could have justified such an expenditure of law enforcement resources. Devices like the one used in the present case, however, make long-term monitoring relatively easy and cheap.

The Jones analysis recognizes that police use of automated surveillance technology to systematically track our movements in public places upsets the balance of power protected by the Constitution and violates the societal norms of privacy that are fundamental to human society.

In Carpenter, the Supreme Court extended Jones to the tracking of people's movements through cell-site location information (CSLI). Carpenter recognized that when the Government tracks the location of a cell phone, it achieves "near perfect surveillance," as if it had attached an ankle monitor to the phone's user. The Court rejected the government's argument that, under the troubling third-party doctrine, Mr. Carpenter had no reasonable expectation of privacy in his CSLI because he had already disclosed it to a third party, namely his phone service provider.

AR is Even More Privacy Invasive Than GPS and CSLI

Like GPS devices and CSLI, AR devices are an automated technology that systematically documents what we are doing, so AR triggers strong Fourth Amendment protection. Indeed, ubiquitous AR devices would provide even more perfect surveillance than GPS and CSLI, not only tracking the user's information, but gaining a telling window into the lives of all the bystanders around the user.

With enough smart glasses in a location, one could create a virtual time machine to revisit that exact moment in time and space. This is the very thing that concerned the Carpenter court:

the Government can now travel back in time to retrace a person's whereabouts, subject only to the retention policies of the wireless carriers, which currently maintain records for up to five years. Critically, because location information is continually logged for all of the 400 million devices in the United States, not just those belonging to persons who might happen to come under investigation, this newfound tracking capacity runs against everyone.

Likewise, the Special Rapporteur on the Protection of Human Rights explained that a collect-it-all approach is incompatible with the right to privacy:

Shortly put, it is incompatible with existing concepts of privacy for States to collect all communications or metadata all the time indiscriminately. The very essence of the right to the privacy of communication is that infringements must be exceptional, and justified on a case-by-case basis.

AR is location tracking on steroids. AR can be enhanced by overlays such as facial recognition, transforming smartglasses into a powerful identification tool capable of providing a rich and instantaneous profile of any random person on the street to the wearer, to a massive database, and to any corporate or government agent (or data thief) who can access that database. With additional emerging and unproven visual analytics (everything from aggression analysis to lie detection based on facial expressions is being proposed), this technology poses a truly staggering threat of surveillance and bias.

Thus, as the court recognized in Canada v. European Union, the need for such legal safeguards is "all the greater where personal data is subject to automated processing," and those considerations "apply particularly where the protection of the particular category of personal data that is sensitive data is at stake."

Augmented reality will expose our public, social, and inner lives in a way that may be even more invasive than the smartphone's revealing montage of the user's life that the Supreme Court protected in Riley v. California. Thus it is critical for courts, legislators, and executive officers to recognize that the government cannot access the records generated by AR without a warrant.

Corporations Can Invade AR Privacy, Too

Even more must be done to protect against a descent into AR dystopia. Manufacturers and service providers must resist the urge, all too common in Silicon Valley, to collect it all in case the data may be useful later. The less data companies collect and store now, the less data the government can seize later.

This is why tech companies should protect not only their users' right to privacy against government surveillance but also their users' right to data protection. Companies must, therefore, collect, use, and share their users' AR data only as minimally necessary to provide the specific service their users asked for. Companies should also limit the amount of data transmitted to the cloud and the period it is retained, while investing in robust security and strong encryption, with user-held keys, to give users control over the information collected. Moreover, we need strong transparency policies, explicitly stating the purposes for and means of data processing, and allowing users to securely access and port their data.

Likewise, legislatures should look to the augmented reality future and augment our protections against government and corporate overreach. Congress passed the Wiretap Act to give extra protection to phone calls in 1968, and expanded statutory protections to email and subscriber records in 1986 with the Electronic Communications Privacy Act. Many jurisdictions have eavesdropping laws that require all-party consent before recording a conversation. Likewise, hidden-camera and paparazzi laws can limit taking photographs and recording videos, even in places open to the public, though they are generally silent on the advanced surveillance possible with technologies like spatial mapping. Modernization of these statutory privacy safeguards, with new laws like CalECPA, has taken a long time and remains incomplete.

Through strong policy, robust transparency, wise courts, modernized statutes, and privacy-by-design engineering, we can and must have augmented reality with augmented privacy. The future is tomorrow, so let's make it a future we would want to live in.


Don’t discount the majority of your state: Reaching rural Southern voters – scalawagmagazine.org

Scalawag created the As the South Votes project in part as a resource for rural Southern voters whose stories often go uncovered, or are flat-out misrepresented, by national media outlets. Stereotypes of rural voters as people who vote against their own interests fail to see the structural ways in which rural communities are discounted and intentionally discouraged from voting. At a recent virtual town hall, Anoa Changa sat down with three representatives from advocacy groups across the South to discuss how to ethically empower rural voters this election.

If we had enough voting power in our metro areas, we would already be living in the kind of state we want to live in.

Accessibility of information applies to material barriers, too. Advocacy groups across the South are investing more in signage like billboards and other "offline" media to get actionable information into the hands of people in communities without technological infrastructure.

"When I'm thinking about voter IDs, and the limitations of folks who live in rural communities outside of Jackson, I'm thinking about [] places where you don't have a mailbox, you have a PO box, and the PO box is in town, and you have to drive to town to get there," Bennett said. "People don't have public transportation, people don't have cars. So we're talking about all of these different economic restrictions and mobility issues around like how people can even access the thing in the first place."

Adding to the accessibility headache, mainstream media also often mischaracterizes or fails to accurately represent the real concerns of people in rural areas, feeding into the kind of general distrust that Benavidez cautioned about.

This predisposition and valid hesitance toward outsiders make deep partnerships and relationship-building even more crucial for organizers trying to win trust in areas where others have historically not made appropriate efforts.

"When I go to do this work in those rural communities, I of course go with deference and respect, but also an understanding that we have established groups and organizations that do the work," Khondoker said.

It's those strategic partnerships, and the follow-through on them, that can make or break effective mobilization.

"I don't make promises I can't keep. If I tell them I want to show up, I show up," Shelton said. "You know, if we say we're going to meet on a Wednesday at five o'clock and it's raining, we will still be there Wednesday, even if it's raining. I think that consistency and showing up in those rural communities really makes a huge difference."

That mobilization is key in harnessing the power of would-be voters to swing entire states.

"One of the things that I always remind folks in South Louisiana is that if it would be enough for us to have the power in South Louisiana, we'd already be living in the state we want to live in, because it'd be fixed," Shelton said. "But the reality is that you've got to have the whole state engaged. And those voters and those voices in those rural parishes are really critical to how we build and get to a stronger state, and how we build voice and power."

Arekia Bennett has organized and empowered youth across Mississippi over the last 10 years. She serves as the Executive Director of Mississippi Votes, a statewide, millennial-led, civic engagement nonprofit organization that has engaged over 500,000 young people across the state. Since 2018, Mississippi Votes has registered around 15,000 new voters, many of them between the ages of 18 and 39, and many of whom were formerly incarcerated or are currently in prison. Mississippi Votes also advocates for policies to expand voting access in Mississippi.

Aklima Khondoker is All Voting is Local's Georgia State Director. Prior to joining the campaign, Khondoker worked as a staff attorney and the senior manager for the Voting Access Project at the ACLU of Georgia, where she focused on First and Fourth Amendment issues, women's reproductive freedoms, and voting rights. Her voting rights work in Georgia includes both litigation and advocacy. She's been involved in the development and execution of voting rights strategy that has included crafting policy and regulatory proposals, partnership development, monitoring local election boards, and successfully advocating for more voting sites.

Ashley Shelton is the Executive Director of the Power Coalition, a statewide 501(c)(3) table in Louisiana. The Power Coalition uses a broad-based strategy that combines community organizing, issue advocacy, and civic action, all while increasing the capacity of community organizations throughout the state to sustain and hold the work. Their integrated voter engagement approach has changed policy at the municipal and state levels, and moved infrequent voters of color to vote at higher rates.


The alarming question behind Barr’s ‘unmasking’ probe – KTVZ

The investigation Attorney General William Barr personally ordered into the Obama administration's handling of intelligence reports, dubbed "OBAMAGATE!" by President Trump, apparently ended with a whimper Tuesday. US Attorney John Bash concluded that no substantive wrongdoing took place, people familiar with the matter told The Washington Post.

The implications of this finding are significant, and not good for President Donald Trump.

The crux of this investigation involves "masking" and "unmasking," terms used by the intelligence community to refer to shielding (or revealing) the names of people mentioned in an intelligence report who were not the targets of the investigation.

For example, if the National Security Agency intercepts a conversation between a foreign intelligence target and someone in the United States, a report on that conversation disseminated outside the agency would refer to the American as U.S. PERSON 1, not by name. This process protects the Fourth Amendment rights of someone who is incidentally (though still legally) caught up in our intelligence gathering efforts on foreign targets.
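A minimal sketch of that masking step, using an invented report and invented names (real minimization procedures are far more involved than simple substitution), might look like:

```python
# Sketch of "masking" in a disseminated intelligence report: named U.S.
# persons are replaced with numbered placeholders, while the collecting
# agency keeps the mapping so an authorized official can later request
# an "unmasking." Report text and names here are invented.
def mask_report(text, us_persons):
    """Replace each named U.S. person with a numbered placeholder."""
    masked = text
    for i, name in enumerate(us_persons, start=1):
        masked = masked.replace(name, f"U.S. PERSON {i}")
    return masked

names = ["Jane Roe", "John Doe"]
report = "TARGET discussed sanctions with Jane Roe; John Doe was present."
masked = mask_report(report, names)

# The agency retains the placeholder-to-identity mapping internally;
# unmasking is a controlled lookup into it.
mapping = {f"U.S. PERSON {i}": n for i, n in enumerate(names, start=1)}
```

Unmasking, in this picture, is not new surveillance: it is a vetted lookup into a mapping the agency already holds.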

For various reasons, people who receive such an intelligence report may need to know who the masked person is. It might help another intelligence agency connect the dots or add a puzzle piece in another investigation.

For high-level government officials, like diplomats, unmasking may be directly relevant to their work. In instances like these, officials can ask the agency that collected the intelligence to reveal to them the identity of a masked person, as long as they are authorized to receive that information and their request meets at least one of seven criteria set by the NSA. The criteria include, for example, that the intelligence indicates the US person may be involved in a crime, or is being targeted by foreign intelligence, among other reasons.

So, let's turn to Barr's unmasking probe.

Last May, Richard Grenell, then Trump's acting director of national intelligence, released a list of Obama administration officials who had made unmasking requests to the NSA between November 2016 and January 2017 in which the unmasked individual was ultimately identified as former Trump national security adviser Gen. Michael Flynn. The allegation being investigated was whether these requests were intended to sabotage the Trump campaign by unfairly targeting Flynn.

The investigation also seemed to link this allegation with the later FBI investigation of Flynn for making false statements about his secret call to then-Russian Ambassador Sergey Kislyak, charges that Barr's Justice Department is currently trying to have dropped.

The basis of this unmasking investigation was inherently flawed from the beginning. For starters, as explained above, the very purpose of an unmasking request is to learn the name of a person who is unidentified. The unmasking process by definition cannot be used to target a specific person, since the requesting official would have no idea who the person is until the request is granted and their identity is unveiled.

Further, the NSA memo providing the list of officials who made unmasking requests included an important note: each individual was an authorized recipient of the original report, and the unmasking was approved through NSA's standard process, which includes a review of the justification for the request. That means that each unmasking request met at least one of the seven justifications required by the NSA, and that Flynn's identity was necessary to fully understand the intelligence or its importance. Barr's handpicked Justice Department investigator apparently agrees.

Which is why this is bad news for Trump. Ultimately, 39 different government and intelligence officials found the communications or activities Flynn was participating in so alarming that they each separately and independently made unmasking requests to the NSA. They all did this before even knowing the masked person was someone involved in Trump's campaign.

These officials ranged from the US NATO defense adviser to the then-US ambassador to Russia. In addition, many of these requests were made in mid-December 2016, suggesting that the same few reports were raising alarm bells across the government and intelligence community.

Worst of all for Trump, the alarming activity by Flynn reported by the NSA has nothing to do with the FBI's case against Flynn for lying about his secret call to Kislyak. We know this for two reasons. First, that call took place on December 29, well after the majority of the unmasking requests involving Flynn. Second, that intelligence would not have been collected by the NSA in the first place: only the FBI can conduct electronic surveillance of foreign targets, like Kislyak, who are located inside the US.

So whatever Flynn was up to cannot be dismissed as part of the "Deep State coup" that Trump has repeatedly claimed the FBI was waging against him.

Ultimately, Barr's unmasking investigation answers the question of whether Obama administration officials made legitimate and justified unmasking requests. They did. But it raises a new one: What, exactly, was Flynn up to in December 2016, while he was part of Trump's transition team, that caused these officials to be so concerned? That's an investigation worth pursuing.


Minneapolis Will Consider Facial Recognition Ban – VICE

A Minneapolis City Council member filed a motion that could result in a citywide ban on law enforcement use of facial recognition technology.

If successful, the motion, which was filed on October 2 and will be officially introduced Friday, could signal a wave of reforms over the use of military and surveillance equipment following the murder of George Floyd by Minneapolis police.

As calls to defund and disband police forces reverberate across the United States, a coalition has formed in Minnesota to rein in intrusive surveillance technology and establish democratic controls over policing. The POSTME coalition, which stands for Public Oversight of Surveillance Technology and Military Equipment, has lobbied for a ban on the use of facial recognition by police in Minneapolis, among other reforms. In regular meetings with stakeholders, including elected officials, organizers from around the nation, and administrative staff within the city, the group has drafted legislation and educated those in government on why these changes are vital. Their first target is a ban on the use of facial recognition by the Minneapolis Police Department, which is seen as a strong first step toward restoring the Fourth Amendment protections that have lapsed in recent years.

Munira Mohamed, an organizer with the coalition, explained: "One of the most insidious aspects of facial recognition technologies is how widespread and indiscriminate it can be, which is made even more horrifying when you learn how absolutely unreliable and inaccurate it can be. The amount of racial bias shown in this technology is stunning and it has incredibly painful consequences for those falsely identified."

Numerous reports and studies have shown just how wildly inaccurate facial recognition systems can be. The Detroit Chief of Police said that the force's software misidentifies people 96 percent of the time, and public records from the force showed that it was used almost exclusively on Black people in 2019. The use of facial recognition has been linked to at least two false arrests in Detroit, and the lawyers of one falsely arrested man believe there are many more victims.

In Minnesota, alarming investigative reports have raised concerns that law enforcement was rapidly adopting facial recognition and deliberately hiding its actions from the public. Law enforcement emails obtained by public records request stated, "this is not an application that I want advertised to anyone other than sheriff's office employees." This tendency to keep technology secret and deploy it against the public without any oversight is the norm, not the exception, in Minnesota. This undemocratic process has been the case with the vast majority, if not all, of the surveillance technologies being used in the state.

Emun Solomon, another organizer with the coalition, said: "The goals of POSTME are really divided into two buckets for me. The first is to establish a future proofing. The second is to build a friction-less or near frictionless system to maintain community engagement in holding police accountable. This also extends to other cities as well as the state of Minnesota."

The coalition includes members of the American Civil Liberties Union, the Council on American-Islamic Relations, Restore the Fourth, and Communities United Against Police Brutality, among other organizations. They met for the first time earlier this year, prior to the murder of George Floyd by Minneapolis police, and have decided for the time being to focus their efforts on the Minneapolis City Council and expand from there. From the coalition's website: "Police are increasingly using surveillance technology and military equipment to further entrench racial bias into the criminal justice system, secretly invade civilian privacy, and wrongfully arrest innocent people."

"POSTME is all about bringing the values of democracy to the sphere of police powers. Not only do ordinary citizens not have a say, the police are often systematically shielded from any kind of public regulation or control and largely operate in the dark. The POSTME coalition is determined to bring oversight and accountability to surveillance and military technologies," Mohamed explained.

She notes that these reforms also expand democracy itself: "Not only do we want laws and structures in place that create greater transparency, but we want community control of that process." Surveillance technology is often implemented in complete secrecy, and only after it has been in use for some time does its existence in a local community become apparent. The POSTME coalition wants to change that dynamic by plugging the community into conversations before new technology or military equipment is acquired by police.

Now that the facial recognition ban is beginning to advance at city hall, the POSTME coalition aims to increase community involvement in these discussions and build the political capital to further these goals. The group is planning a virtual town hall on October 22nd with Minneapolis Councilmember Steve Fletcher to discuss the potential ban and the coalition's next steps.

Continued here:
Minneapolis Will Consider Facial Recognition Ban - VICE

Google gives IP addresses to police of people who have searched particular keywords or addresses – Privacy News Online

According to court records from an arson case in Florida, Google regularly provides law enforcement with information about people who search for a particular term or physical location using a Google service like Google Search or Google Maps, including their IP addresses. Typically, if police have an interest in the search history of one particular suspect, they have to get a warrant for that person. What they do now instead is request information on everyone who searched a particular keyword in a particular timeframe and use that to build a list of suspects by matching IP addresses to real identities. The bar for the latter type of warrant is much lower than for the former. Here, Google was asked for:

users who had searched the address of the residence close in time to the arson.

In this particular arson case, the police got the suspect's IP address from the wide-net data request to Google, then associated that IP address with a particular phone number belonging to the suspect. Police were then able to use that known phone number to get location records from the phone provider's cell towers, which corresponded to the arson location. They didn't get the phone's location data or GPS coordinates as reported by the phone; they got location data as reported via cell tower records.
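The investigative chain described above, keyword search filtered by time window, then IPs matched to identities, can be sketched conceptually. Everything below is a hypothetical illustration (invented field names and example data, not Google's or any carrier's actual schema); it exists only to show why such a dragnet inevitably sweeps in anyone who searched the term for innocent reasons:

```python
from datetime import datetime

# Hypothetical search-log records a provider might return for a keyword
# warrant. Field names and data are illustrative only.
search_logs = [
    {"ip": "203.0.113.7",  "query": "123 Main St",    "ts": datetime(2020, 3, 1, 21, 50)},
    {"ip": "198.51.100.4", "query": "123 Main St",    "ts": datetime(2020, 3, 1, 9, 15)},
    {"ip": "203.0.113.9",  "query": "coffee near me", "ts": datetime(2020, 3, 1, 22, 0)},
]

def keyword_dragnet(logs, keyword, start, end):
    """Return every IP that searched `keyword` inside the time window --
    with no distinction between a suspect and a curious neighbor."""
    return {r["ip"] for r in logs
            if keyword in r["query"] and start <= r["ts"] <= end}

# A window "close in time" to the incident (hypothetical).
suspects = keyword_dragnet(search_logs, "123 Main St",
                           datetime(2020, 3, 1, 20, 0),
                           datetime(2020, 3, 1, 23, 0))
print(suspects)
```

The filter is purely mechanical: membership in the suspect list depends only on what was typed and when, which is exactly why courts and civil liberties groups question whether such warrants can ever be particularized.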

On Google's end, the company states that it complies with these warrants but always pushes for narrower requirements to protect the privacy of more users. The exact contents of the warrant are still sealed, so we don't know for sure just how many people had their privacy violated. Google's Director of Law Enforcement and Information Security, Richard Salgado, stated:

We require a warrant and push to narrow the scope of these particular demands when overly broad, including by objecting in court when appropriate. These data demands represent less than 1% of total warrants and a small fraction of the overall legal demands for user data that we currently receive.

No matter how narrow Google can get a warrant to be, the fact that innocent people can be scooped up in dragnet surveillance techniques like this is heinous. The real issue is that these warrants can be granted at all: keyword warrants violate the Fourth Amendment of the US Constitution. Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, told CNET:

This keyword warrant evades the Fourth Amendment checks on police surveillance. When a court authorizes a data dump of every person who searched for a specific term or address, it's likely unconstitutional.

Keyword warrants are very similar to reverse location warrants, or geofence warrants, which law enforcement uses to get lists of people who were within a specific geographic area at the time of a crime. The thing is, a federal court has ruled that reverse location warrants violate the Fourth Amendment. Police are sidestepping that hiccup by falling back on their tried-and-true method of keyword warrants. Google declined to comment to CNET on how many keyword warrants it has received in the last three years. How about the last twenty?

Read the original here:
Google gives IP addresses to police of people who have searched particular keywords or addresses - Privacy News Online