Archive for the ‘Artificial Intelligence’ Category

Trepper: The Anti-Bigotry App That Uses Artificial Intelligence to Identify White Nationalists – Capital Research Center

A nonprofit organization claims to have created a phone app that uses artificial intelligence to identify people the group believes to be "known white nationalists." Of course, the organization considers people and organizations to be white nationalists even if they're just regular conservatives, and even if they're not white.

IREHR Sees White Nationalists Everywhere

In February, the Kansas City Star ran a widely circulated hit piece on America First Students, a new student organization at Kansas State University. The smear campaign was based on research by the Institute for Research and Education on Human Rights (IREHR). Despite being a little-known and little-funded organization, IREHR regularly publishes large volumes of research and has even created a bigotry-tracking app that promises to use artificial intelligence (AI) and machine learning to out "known white nationalists."

Founded in 1983, IREHR's vision, according to its founder Leonard Zeskind, is to fight against what it sees as white nationalism. Zeskind claims that white nationalism is in a symbiotic relationship with mainstream conservatism, such as the Christian right and paleoconservatism. In 2011, Zeskind wrote about IREHR's "Re-Birth" in the Huffington Post, explicitly attacking the Tea Party movement for its "racism" in his second sentence. "Tea Party Nationalism" even has its own section on the IREHR website, right under "Race, Racism & White Nationalism." In 2018, years after the Tea Party movement was at its peak, IREHR ran a piece titled "Guns and Racism: From the National Rifle Association to Far Right Militias and the Tea Party." The Tea Party movement is such a target of IREHR's ire that Zeskind, along with IREHR president Devin Burghart, wrote a report called "Tea Party Nationalism," commissioned by the NAACP. The NAACP also created a companion website: TeaPartyNationalism.com.

In addition to going after the Tea Party movement and the National Rifle Association, IREHR also targets the American Conservative Union's Conservative Political Action Conference (CPAC). In 2014, Burghart published "The Unbearable Whiteness of CPAC."

Given how IREHR sees white nationalism throughout mainstream conservatism and considers whiteness something "unbearable," its phone application that promises to identify "known white nationalists" via artificial intelligence sounds incredibly Orwellian.

The Trepper App

Burghart announced the creation of IREHR's anti-bigotry app during a Holocaust Remembrance Day speech in May 2019. The app is named after Leopold Trepper, the head of a Soviet anti-Nazi spy ring during WWII known as the Red Orchestra. Trepper was later imprisoned by Stalin, reportedly because Trepper, himself Jewish, surrounded himself with Jews. (This detail was not included in IREHR's speech on Holocaust Remembrance Day; however, IREHR's founder and president still generally appreciates communist figures, even visiting the grave of Marxist philosopher Antonio Gramsci.)

During the speech, Burghart boasted that the app "now allows us to use the latest in machine learning and artificial intelligence to see if people in the videos you submit are known white nationalists." "Machine learning" and "artificial intelligence" are often code words for facial recognition software. It is unclear, however, how exactly machine learning and artificial intelligence are being used in the app.

Trepper: The Anti-Bigotry App promises to provide "instant updates about new threats near you." According to the IREHR website, the app will allow users to receive push notifications of seemingly white nationalist activity, and it allows users to upload their own events, photos, and videos to the app.

Downloading Trepper, users are greeted with a page telling them what they can find on the app: a news tab, a "Tools for Response and Resistance" tab, and a way to "Report" incidents (see the screenshot below).

The "Report Bigotry" tab allows users to request help (a feature coming soon), record video of an ongoing live event, or write a report about an instance of bigotry they witnessed.

The reporting feature lets users provide details about a reportedly bigotry-related event, ranging from murder, the first item on the list, to something "internet-based" or "other."

Much of the app is still under construction. The "signs and symbols of bigotry" section of the response toolkit is "coming soon," as is the "help, FAQ, tips, etc." section. The section for users to find or create their own anti-bigotry groups loads a blank page. As of this writing, the only video uploaded to the Trepper app is a video of a Patriot Prayer rally from 2018. Patriot Prayer is a Portland-based group that has been maligned by the Southern Poverty Law Center (SPLC) and targeted by Antifa activists.

Curiously, Patriot Prayer is included on the Trepper app even though its controversial founder, Joey Gibson, identifies as Japanese, not Caucasian, and has repeatedly condemned white supremacy. Apparently, to the IREHR app, a person doesn't even need to be white to be a white nationalist.

In fact, in the security section of the response toolkit, the Trepper app deviates from merely attacking white nationalism: It suggests ways to protect against the "far right," whomever IREHR deems the far right to be. The security section of the app tells users to protect their offices and homes from possible far right infiltrators.

The app also tells users involved in their anti-racist activity to shred documents, handle potential hate mail with tongs, and circulate pictures online of people they suspect to be following them.

Who Is Funding IREHR?

A section of the IREHR app is devoted to learning more about IREHR, with the option to donate to the organization. According to tax filings, the organization has not had gross receipts of more than $50,000 since 2008. It is unclear how such a small organization with so little funding can create such a complicated app, one with the potential of harnessing facial recognition technology.

Garbage In, Garbage Out

IREHR's plan to implement what appears to be facial recognition technology to identify people accused of having the wrong political beliefs is terrifying. IREHR has a history of identifying people, all on its own, as white nationalists who are not white nationalists at all. It already attacks mainstream conservative ideas, groups, and figures as white nationalist, or as tied to white nationalism. It is a terrible precedent in general to use artificial intelligence to identify (and catalog) people because they have the wrong political beliefs.


Artificial intelligence won’t rule the world so long as humans rule AI – The Age

Four days later, the Vatican issued a paper calling for "new forms of regulation" of AI based on the principles of "transparency, inclusion, responsibility, impartiality, reliability, security and privacy".


The striking thing about both these pronouncements is the degree to which they align with the official line from Silicon Valley, which couches ethics as a set of voluntary principles that will guide, rather than direct, the development of AI.

By proposing broad principles, which are notoriously difficult to define legally, they avoid the guard rails or red lines that would give genuine oversight over the way this technology develops.

The other problem with these voluntary codes is that they will always be in conflict with the key drivers of technological change: to make money (if you are a business) or save money (if you are a government).

But there's an alternative approach to harnessing technological change that warrants serious consideration. It is proposed by the Australian Human Rights Commission. Rather than woolly guiding principles, Commissioner Ed Santow argues that AI should be developed within three clear parameters.


First, it should comply with human rights law. Second, it should be used in ways that minimise harm. Finally, humans need to be accountable for the way AI is used. The difference with this approach is that it anchors AI development within the existing legal framework.

To legally operate in Australia, under this proposal, the development of artificial intelligence would need to ensure it did not discriminate on the grounds of gender, race or social demographic, either directly or in effect.

The AI proponents would also need to show they had thought through the impact of their technology, much like a property developer needs to conduct an environmental impact statement before building.

And critically, an AI tool should have a human, a flesh-and-blood person, who is responsible for its design and operation.

How would these principles work in practice? It's worth looking at the failed robodebt program, under which recipients of government benefits were sent letters demanding they repay money because they had been overpaid.

If it had been scrutinised before it went live, robodebt is likely to have been found discriminatory, as it shifted the onus of proof onto people from society's most marginalised groups to show their payments were valid.

If it had been subject to a public impact review, the glaring anomalies and inconsistencies in matching Australian Tax Office and social security information would have become apparent before it was trialled on vulnerable people. And if a human had been accountable for its operation, those who received a notice would have had a course of review, rather than feeling as though they were speaking to a machine.

The whole costly and destructive debacle might have been prevented.

Embracing a future where these "disruptive" technologies remake our society guided by voluntary ethical principles is not good enough. As Robert-Elliott Smith observes in his excellent book Rage Inside the Machine, the idea that AI is amoral is bunkum. The values and priorities of the humans who commission and design it will determine the end product.


This challenge will become more pressing as algorithms begin to process banks of photos and video that purport to "recognise" individuals, track their movements and predict their motivations. The Human Rights Commission report calls for a moratorium on the use of this technology in high-stakes areas such as policing. It seeks to protect citizens from "bad" applications, but also to provide an incentive for industry to support the development of an enforceable legal framework.

Champions of technology may well argue that government intervention will slow down development and risk Australia being "left behind". But if we succeed in ensuring AI is "fair by design", we might end up with a distinctly Australian technology, which reflects our values, to share with the world.

Peter Lewis is the executive director of Essential, a progressive research and communications company, and the director of the Centre for Responsible Technology.


Artificial Intelligence checks influence Corona policy has on the infection explosion – Innovation Origins

Eindhoven-based start-up Fruitpunch AI is developing a system that uses artificial intelligence to analyze how effective the Dutch government's policy is in combating coronavirus infections.

Over the next two months, the platform from the AI engineers will collect data that will provide insight into how citizens have behaved in response to measures taken to stop the disease. In addition, the platform is to analyze the impact of the pandemic on the economy. It will do this by comparing the increase in the number of Corona patients with the fall in stock market prices, among other things. On this basis, it is possible to assess the probability of a recession as a result of the pandemic. And predict how deep it might become.
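The article does not describe Fruitpunch's actual model, but the basic comparison it mentions, relating growth in case numbers to falls in stock market prices, can be sketched as a simple correlation. All numbers below are invented for illustration; this is not Fruitpunch's data or method.

```python
# Sketch of the comparison described in the article: correlating weekly growth
# in coronavirus case counts with weekly stock-index returns.
# The figures are hypothetical, chosen only to make the example runnable.

def pct_changes(series):
    """Period-over-period fractional changes."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Hypothetical weekly cumulative case counts and index closing levels.
cases = [120, 310, 820, 2100, 5400]
index = [560, 545, 510, 470, 430]

r = pearson(pct_changes(cases), pct_changes(index))
print(f"correlation between case growth and index returns: {r:.2f}")
```

A strongly negative coefficient on real data would support the article's premise that rising case numbers accompany falling prices; the illustrative figures here carry no such meaning.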

In turn, the effects of recession have an impact on citizens health and well-being. Due to loss of work, income, exposure to stress, debt, cardiovascular disease and so on.

A relationship between these various trends has already been proven by scientists, who have previously published articles on the subject in, for example, the renowned medical journal The Lancet. The researchers also aim to relate this to the government's anti-Corona policy.

The idea to start this Fruitpunch AI "against COVID-19" challenge came from the Fruitpunch AI founders Buster Franken and Vincent Fokker. "The reason was that we had to cancel our events because there were going to be more than a thousand people," says Buster Franken. Back then, that was the limit the government had announced for the number of visitors when deciding whether an event could go ahead or not. "I thought that was strange. It's unclear what risks it could reduce. I then googled to find out more about how the pandemic was unfolding. I saw pretty quickly that there are patterns in the rates of infection."

As an example, Franken's conclusion is that the Chinese government has failed in its policy to combat the disease. "You see that the number of people infected with Corona has never increased as rapidly in any other country as it did in China. That's now dropping because it's getting warmer. But in actual fact, the Chinese government has failed."

"What I want to know is how many infected people are causing a snowball effect which makes it impossible to control the pandemic. Plus there aren't nearly enough beds in ICUs for people to recover. If you know that, you are then able to communicate effectively as a government. You can say to the population: we are taking certain measures to prevent infection because we now have so many patients. We don't want to reach the level that will cause an exponential increase in the number of infections beyond what hospitals are capable of coping with. If you say that, people will have a better understanding of why they need to stay away from certain events as much as possible."

In two weeks' time, Fruitpunch will embark on a second challenge in collaboration with Omdena, a Silicon Valley-based global platform for AI engineers and a partner organization of Fruitpunch AI. The AI engineers will then analyze the effects of government measures around the world on the rise in the number of Corona patients. This will likely also reveal the effect of having children at home compared to the effect of letting them go to school, a point of discussion in the Netherlands, since children in other countries, such as neighboring Belgium and France, are obliged to stay home.

"You can see that there wasn't a game plan in place for the outbreak of a virus. And that every country is doing something different," says Franken. "Which in itself is pretty ridiculous, seeing that something like this is likely to happen."

The fact that the World Health Organization (WHO) does not have any such plan in place is not surprising, Franken says, considering it has no say in the health policies of the various member countries. But we do intend to cooperate with the WHO on this.

Franken adds that the data that's needed to make the various analyses, both economic and medical, is fairly easy to find. "We're using the WHO updates, along with an earlier study of the relationship between a recession and public health in Brazil. We adjust the results to the variances between Brazil and the other countries, for example, public access to healthcare."



Artificial intelligence And Smartphone Photography: How Tech Makes You Look Like A Pro (infographic) – Digital Information World

Have you noticed lately how talented everyone you know is at taking pictures? There used to be skill involved in photography, but thanks to the artificial intelligence (AI) in your smartphone's camera, everyone can be Annie Leibovitz. But just how does that technology work to reduce the blur from your shaking hands or fix the lighting when you are taking photos in dark places?

Artificial Intelligence Mimics Human Sight

In our brains there are all kinds of things happening to ensure we can see clearly. Our brains filter out tiny movements and adjust the processing of images so that they appear focused and make sense. But when you have an ordinary camera in your shaky hands, those images often come out blurred. Artificial intelligence algorithms within your smartphone camera can adjust for your shaking hands, but that's not all. They can also adjust for poor lighting conditions, including darkness, change scene modes according to what you're taking a picture of, and detect faces to ensure eyes are open and people are smiling.

Artificial intelligence can also take multiple photos within a few milliseconds and stitch together the best parts of each photo for one really excellent composite photo. High dynamic range combines the best elements of three photos, while Top Shot actually takes a short video and combines the best elements into one photo.
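The vendors' actual fusion algorithms are not described here, but the "best parts of each photo" idea can be illustrated with a toy exposure-fusion pass: for every pixel, keep the value from whichever frame sits closest to mid-exposure. The frames below are tiny grids of 0-255 luminance values invented for the example; real HDR and Top Shot pipelines are far more sophisticated.

```python
# Toy multi-frame compositing: per pixel, pick the value from the frame whose
# exposure is closest to mid-gray (128), i.e. least blown-out or crushed.
# This is only a sketch of the general idea, not any vendor's algorithm.

MID_GRAY = 128

def fuse(frames):
    """Per-pixel selection of the best-exposed value across frames."""
    height, width = len(frames[0]), len(frames[0][0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            candidates = [f[y][x] for f in frames]
            row.append(min(candidates, key=lambda v: abs(v - MID_GRAY)))
        out.append(row)
    return out

underexposed = [[10, 20], [30, 240]]
normal       = [[90, 130], [140, 250]]
overexposed  = [[200, 250], [255, 120]]

print(fuse([underexposed, normal, overexposed]))  # → [[90, 130], [140, 120]]
```

Note how the fused result borrows from all three frames: mostly the normal exposure, but the bottom-right pixel comes from the overexposed frame, where that region happened to be best exposed.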

In addition to these features, your smartphone camera can blur the background in a portrait, smooth your wrinkles, hide your blemishes, and more. 42% of Americans choose Portrait Mode, which combines background blur with beautification for the perfect selfie every time.

There are also apps that can further enhance your smartphone photos and videos:

When you choose your next smartphone, how important is the camera inside? Learn more about smartphone cameras and all the apps and add-ons that can turn you into a professional photographer from the infographic below. Are you ready to become the next Ansel Adams?

Read next: A comparison between properties of the most used image file types (infographic)


Global smart-city artificial intelligence software revenue set to rise sevenfold by 2025, spurred by advancing AI and connectivity technologies -…

12 Mar 2020: The advent of 4G and 5G internet of things (IoT)-based connectivity is spurring the online migration of smart-city applications, helping generate a more than sevenfold increase in smart-city artificial intelligence (AI) software revenue by 2025.

The global smart-city AI software market is set to soar to $4.9 billion in 2025, up from $673.8 million in 2019, according to Omdia. Wireless data communications standards are enabling smart-city applications to move into the online realm, where they can capitalize on the latest AI innovations. The growing capabilities of AI are enabling data and insights collected through IoT networks to be monitored, analyzed and acted upon.

"From video surveillance, to traffic control, to street lighting, smart-city use cases of all types are defined by the collection, management and usage of data," said Keith Kirkpatrick, principal analyst for AI at Omdia. "However, until recently, connecting disparate components and systems together to work in concert has been challenging due to the lack of connectivity solutions that are fast, cost effective, low latency and ubiquitous in coverage. These challenges now are being overcome by leveraging advances in AI and connectivity."

The arrival of 4G and 5G wireless data technologies is making it easier to collect and manage data, promoting the migration of smart-city AI software to the online realm. AI allows data to be analyzed more deeply than ever before. The technology can identify patterns or anomalies within that data, which then can be employed for tasks that allow machines to mimic what humans might consider to be intelligence.
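As a concrete, if simplified, illustration of "identifying anomalies" in city sensor data (not any particular vendor's method), a rolling z-score flags readings that sit far outside the recent average:

```python
# Minimal sketch of anomaly detection on smart-city sensor data: flag a reading
# when it deviates from the mean of a recent window by more than a threshold
# number of standard deviations. The data below is invented for illustration.
import statistics

def anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings that are rolling-z-score outliers."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent)
        if stdev and abs(readings[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Hypothetical hourly vehicle counts from a traffic sensor; hour 8 spikes.
counts = [40, 42, 38, 41, 39, 40, 43, 41, 400, 42, 40]
print(anomalies(counts))  # → [8]
```

Production systems would use far richer models, but the principle is the same: learn what "normal" looks like from recent data, then surface departures from it for human attention.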

Using the power of AI, smart-city systems can create municipal systems and services that not only operate more efficiently, but also provide significant benefits to workers and visitors. These benefits can come in many forms, including reduced crime, cleaner air, more orderly traffic flow and more efficient government services, as detailed in the latest Omdia AI research report, "Artificial Intelligence Applications for Smart Cities."

One example of how smart cities are leveraging AI is in the video surveillance realm.

When hosting public events, some cities are beginning to use video cameras that are mated to AI-based video analytics technology. The goal is to have AI algorithms scan the video and look for behavioral or situational anomalies that could indicate that a terrorist act or other outbreaks of violence may be about to occur.

However, cities are increasingly employing cloud-based AI systems that can search footage from most closed-circuit TV (CCTV) systems, allowing the platform and technology to be applied to existing camera infrastructure. Furthermore, video surveillance can be combined with AI-based object detection to perform tasks including learning patterns in an area; detecting faces, gender, heights and moods; reading license plates; and identifying anomalies or potential threats, such as unattended packages.
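One of the tasks listed above, spotting an unattended package, can be sketched in a deliberately naive form: compare each frame against an empty-scene background and flag cells that stay occupied for several consecutive frames. Real systems use learned object detectors rather than raw pixel differencing; everything below, including the grid sizes and thresholds, is an invented illustration.

```python
# Hedged sketch of static-object ("unattended package") detection: cells that
# differ from the background and remain unchanged across several consecutive
# frames are flagged. Frames are tiny luminance grids invented for the example.

def static_objects(background, frames, min_frames=3, diff=30):
    """Return (row, col) cells occupied by a static foreground object."""
    height, width = len(background), len(background[0])
    persist = [[0] * width for _ in range(height)]
    for frame in frames:
        for y in range(height):
            for x in range(width):
                if abs(frame[y][x] - background[y][x]) > diff:
                    persist[y][x] += 1
                else:
                    persist[y][x] = 0  # object moved or disappeared: reset
    return [(y, x) for y in range(height) for x in range(width)
            if persist[y][x] >= min_frames]

background = [[50, 50, 50], [50, 50, 50]]
# A bright object appears at cell (1, 2) and stays; cell (0, 0) flickers once.
frames = [
    [[200, 50, 50], [50, 50, 200]],
    [[50, 50, 50], [50, 50, 200]],
    [[50, 50, 50], [50, 50, 200]],
]
print(static_objects(background, frames))  # → [(1, 2)]
```

The transient flicker at (0, 0) is correctly ignored because its persistence counter resets, while the object that lingers at (1, 2) for all three frames is flagged, the same persistence logic, at much greater sophistication, that underlies real unattended-object alerts.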

As the use of surveillance cameras has exploded, AI-based video analytics now represent the only way to extract value in the form of insights, patterns, and action from the plethora of video data generated by smart cities.
