Archive for the ‘Artificial Intelligence’ Category

AI is already helping astronomers make incredible discoveries … – Space.com

World Space Week 2023 is here and Space.com is looking at the current state of artificial intelligence (AI) and its impact on astronomy and space exploration as the space age celebrates its 66th anniversary. Here, Paul Sutter discusses how AI is already helping astronomers make new, incredible discoveries.

Whether we like it or not, artificial intelligence will change the way we interact with the universe.

As a science, astronomy has a long tradition of sifting through massive amounts of data in search of patterns, of making accidental discoveries, and of maintaining a deep connection between theory and observation. These are all areas where artificial intelligence systems can make the field of astronomy faster and more powerful than ever before.

That said, it's important to note that "artificial intelligence" is a very broad term encompassing a wide variety of semi-related software tools and techniques. Astronomers most commonly turn to neural networks, where the software learns about all the connections in a training data set, then applies the knowledge of those connections in a real data set.

Related: How artificial intelligence is helping us explore the solar system

Take, for instance, data processing. The pretty pictures splashed online from the Hubble Space Telescope or James Webb Space Telescope are far from the first pass that those instruments took of that particular patch of sky.

Raw astronomical images are full of errors, messy foregrounds, contaminants, artifacts, and noise. Processing and cleaning these images to make something presentable, not to mention useful for scientific research, requires an enormous amount of input, usually done partially manually and partially by automated systems.

Increasingly, astronomers are turning to artificial intelligence to process the data, pruning out the useless bits of the images to produce a clean result. For example, an image of the supermassive black hole at the heart of the galaxy Messier 87 (M87) first released in 2019 was given a machine learning "makeover" in April 2023, resulting in a much clearer image of the black hole's structure.

In another example, some astronomers feed images of galaxies into a neural network algorithm along with the classification already assigned to each galaxy. The existing classifications came from manual assignments, either by the researchers themselves or by volunteer citizen science efforts. Training set in hand, the neural network can then be applied to real data to classify galaxies automatically, a process that is far faster and much less error-prone than manual classification.
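
The article doesn't name the software astronomers use for this, so what follows is a minimal sketch of the train-then-classify workflow it describes, assuming numeric features have already been extracted from galaxy images. The data, labels, and network size are illustrative stand-ins, not a real pipeline.

```python
# Sketch of the supervised workflow described above: train a small
# neural network on manually labeled galaxies, then classify new ones
# automatically. Feature extraction from images is assumed done.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in data: each row is a galaxy described by a few measured
# properties (e.g., concentration, asymmetry, color indices).
features = rng.normal(size=(1000, 4))
labels = rng.integers(0, 3, size=1000)  # 0=elliptical, 1=spiral, 2=irregular

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

# The network learns the mapping from features to classes...
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# ...and is then applied to data it has never seen.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```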

Astronomers can also use AI to remove the optical interference created by Earth's atmosphere from images of space taken by ground-based telescopes.

AI has even been proposed to help us spot signatures of life on Mars, understand why the sun's corona is so hot, or reveal the ages of stars.

Astronomers are also using neural networks to dig deeper into the universe than ever before. Cosmologists are beginning to employ artificial intelligence to understand the fundamental nature of the cosmos. Two of the biggest cosmic mysteries are the identities of dark matter and dark energy, two substances beyond our current knowledge of physics that together make up over 95% of the total energy content of the universe.

To help identify those strange substances, cosmologists are currently trying to measure their properties: How much dark matter and dark energy there is, and how they've changed over the history of the universe. Tiny changes in the properties of dark matter and dark energy have profound effects on the resulting history of the cosmos, touching everything from the arrangement of galaxies to the star formation rates in galaxies like our Milky Way.

Neural networks are aiding cosmologists in disentangling all the myriad effects of dark matter and dark energy. In this case, the training data comes from sophisticated computer simulations. In those simulations, cosmologists vary the properties of dark matter and dark energy and see what changes. They then feed those results into the neural network so it can discover all the interesting ways that the universe changes. The technique is not quite ready for primetime, but the hope is that cosmologists could eventually point the neural network at real observations and let it tell us what the universe is made of.
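
As a rough illustration of that simulate-then-train loop, here is a toy sketch in which a stand-in "simulator" maps two hypothetical parameters (a dark matter fraction and a dark energy equation-of-state value) to summary statistics, and a network learns to invert the mapping. Nothing here reflects a real cosmological code; it only shows the shape of the approach.

```python
# Toy version of the approach described above: vary parameters in a
# "simulator," then train a network to recover parameters from the
# simulated observables.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def toy_simulator(params):
    """Map (dark matter fraction, dark energy eq. of state) to a fake
    summary-statistic vector; noise stands in for cosmic variance."""
    omega_m, w = params
    base = np.array([2.0 * omega_m, w + omega_m, omega_m - 0.5 * w])
    return base + rng.normal(scale=0.05, size=3)

# Run "simulations" across many parameter choices.
params = rng.uniform([0.1, -1.5], [0.5, -0.5], size=(2000, 2))
observables = np.array([toy_simulator(p) for p in params])

# Train the network to invert the simulator: observables -> parameters.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=1)
net.fit(observables, params)

# Pointing the trained network at a new "observation" yields parameter
# estimates, the end goal the article describes.
print(net.predict(observables[:1]))
```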

Approaches like these are becoming increasingly critical as modern astronomical observatories churn out massive amounts of data. The Vera C. Rubin Observatory, a state-of-the-art facility under construction in Chile, will be tasked with providing over 60 petabytes (one petabyte equals one thousand terabytes) of raw data in the form of high-resolution images of the sky. Parsing that much data is beyond the capabilities of even the most determined of graduate students. Only computers, aided by artificial intelligence, will be up to the task.

Of particular interest to that upcoming observatory will be the search for the unexpected. For example, the astronomer William Herschel discovered the planet Uranus by accident during a regular survey of the night sky. Artificial intelligence can be used to flag and report potentially interesting objects by identifying anything that doesn't fit an established pattern. And in fact, astronomers have already used AI to spot a potentially dangerous asteroid using an algorithm written specifically for the Vera C. Rubin Observatory.

Who knows what future discoveries we will ultimately have to credit to a machine?

Go here to see the original:
AI is already helping astronomers make incredible discoveries ... - Space.com

Domino’s and Microsoft are working together on artificial intelligence – Restaurant Business Online

Domino's plans to start testing some AI strategies within the next six months. | Photo courtesy of Domino's

Domino's and Microsoft want to use AI to improve the pizza ordering process.

The Ann Arbor, Mich.-based pizza chain and the Redmond, Wash.-based tech giant on Tuesday announced a deal to work together on AI-based strategies to improve the ordering process. Domino's expects to test new generative AI-based technology in its stores within the next six months.

The companies said they would use Microsoft Cloud and the Azure OpenAI Service to improve the ordering process through personalization and simplification.

Domino's has already been experimenting with AI to modernize store operations. The company said that it is in the early stages of developing a generative AI assistant with Azure to help store managers with inventory management, ingredient ordering and scheduling.
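
Neither company has published implementation details. For readers curious what such an assistant could look like in code, here is a hypothetical sketch of a single chat call against the Azure OpenAI Service using the openai Python SDK (v1+); the deployment name, prompts, and environment variables are all assumptions, not Domino's actual system.

```python
# Hypothetical sketch only: shows the general shape of a chat request
# to an Azure OpenAI deployment, not Domino's real implementation.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="store-assistant",  # hypothetical deployment name
    messages=[
        {"role": "system",
         "content": "You help a store manager plan ingredient orders "
                    "from current inventory counts and expected demand."},
        {"role": "user",
         "content": "We have 40 dough trays and expect a busy Friday. "
                    "What should tomorrow's order look like?"},
    ],
)
print(response.choices[0].message.content)
```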

The company also plans to streamline pizza preparation and quality controls with more predictive tools. The idea is to free up store managers' time so they can work more with employees and customers.

"Our collaboration over the next five years will help us serve millions of customers with consistent and engaging ordering experiences, while supporting our corporate stores, franchisees and their respective team members with tools to make store operations more efficient and reliable," Kelly Garcia, Domino's chief technology officer, said in a statement.

Domino's and Microsoft plan to establish an Innovation Lab pairing company leaders with world-class engineers to accelerate the time to market for store and ordering innovations. The companies also say they are committed to responsible AI practices that protect customer data and privacy.

"As consumer preferences rapidly evolve, generative AI has emerged as a game changer for meeting new demands and transforming the customer experience," said Shelley Bransten, VP of global retail, consumer goods and gaming with Microsoft.

Artificial intelligence has become increasingly common inside restaurants, with chains using the technology to take orders, do back-of-house tasks and make recommendations to customers. Large-scale chains in particular are in something of an arms race to find more uses for AI inside their restaurants to lower labor costs and improve customer service.


Restaurant Business Editor-in-Chief Jonathan Maze is a longtime industry journalist who writes about restaurant finance, mergers and acquisitions and the economy, with a particular focus on quick-service restaurants.

Read the original:
Domino's and Microsoft are working together on artificial intelligence - Restaurant Business Online

Speaker Lectures on artificial intelligence – The Collegian – SDSU Collegian

Arijit (Ari) Sen encouraged using artificial intelligence to augment reporters, not replace them, during his Pulitzer Center Crisis Reporting Lecture held Sept. 28 in the Lewis and Clark room of South Dakota State University's Student Union.

Sen, an award-winning computational journalist at The Dallas Morning News and a former A.I. accountability fellow at the Pulitzer Center, discussed ways A.I. could be used and potential risks of A.I. in journalism.

"A.I. is often vaguely defined, and I think if you listen to some people, it's like the greatest thing since sliced bread," Sen said, before quoting Sundar Pichai, CEO of Google's parent company Alphabet, who has described A.I. as the most profound technology humanity is working on.

According to Sen, A.I. is basically machine learning that teaches computers to use fancy math to find patterns in data. Once the model is trained, it can be used to generate a number, predict something or sort things into categories.

Sen feels that a more important question to focus on is how A.I. is being used in the real world and what real harms this technology is causing people.

"There is a really interesting thing happening right now. Probably since about 2015, A.I. is starting to be used in investigative journalism specifically," Sen said, speaking about a story reported by the Atlanta Journal-Constitution (AJC) on doctors and sex abuse, in which around 100,000 disciplinary complaints had been filed against doctors. Because of the sheer volume of complaints, AJC trained a machine learning model on complaints labeled as related to sexual assault or not, making it easier to sift through the records and compile the story.
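
The Collegian doesn't say which model AJC used. As a minimal sketch of the labeled-complaint triage Sen described, here is a generic text classifier trained on a handful of invented complaint summaries; the texts, labels, and model choice are illustrative only.

```python
# Sketch of the workflow described above: train on complaints hand-
# labeled by reporters, then flag likely matches in the rest.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

complaints = [
    "patient reports inappropriate sexual contact during exam",
    "billing dispute over duplicate charges",
    "doctor made unwanted sexual advances toward patient",
    "failure to maintain adequate medical records",
]
labels = [1, 0, 1, 0]  # 1 = related to sexual misconduct, 0 = not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(complaints, labels)

# The trained model can then triage the tens of thousands of remaining
# complaints, surfacing likely matches for reporters to read in full.
print(model.predict(["complaint alleges sexual misconduct by physician"]))
```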

Although A.I. can prove useful for investigative journalism, Sen explained the risks of the technology and raised questions about the people behind a model: who labels the data, what the A.I.'s creator intends, and how humans working on the same content over a longer time frame would compare.

"The other question we need to think about when working with an A.I. model is asking if a human could do the same thing if we gave them an unlimited amount of time on a task," Sen said. "And if the answer is no, then what makes us think that an A.I. model could do the same thing?"

Sen further elaborated on A.I. bias and fairness with another case study: how Amazon scrapped its secret A.I. recruiting tool after it showed bias against women. Amazon used its current engineers' resumes as training data to recruit people; however, most of its existing engineers were men, which caused the A.I. to develop a bias against women and rank them worse than male candidates.

"One of the cool things about A.I. in accountability reporting is that we're often using A.I. to investigate A.I.," Sen said as he dove into his major case study, Social Sentinel.

Sen described Social Sentinel, now known as Navigate360, as an A.I.-based social media monitoring tool used by schools and colleges to scan for threats of suicide and shootings.

"Well, I was a student, just like all of you, at the University of North Carolina at Chapel Hill (UNC), and there were these protests going on," Sen said. "You know, being the curious journalist that I was, I wanted to know what the police were saying to each other behind the scenes."

Sen's curiosity led him to file a batch of records requests, which yielded around 1,000 pages at the outset. Among them he found a contract between his college and Social Sentinel, which led him to wonder whether his college was using a sketchy A.I. tool. Sen landed an internship at NBC and wrote the story, which was published in December 2019.

"Around that time, I was applying for journalism grad school, and I mentioned this in my application at Berkeley," Sen said. "I was like, this is why I want to go to grad school; I want two years to report this out, because I knew that straight out of undergrad no one was going to hire me to do that story."

He recalls spending his first year doing a clip search, reading about Social Sentinel, and finding that no one was looking at colleges, which he said was strange given that the company had been started by two college campus police chiefs. He spent the remainder of the time calling colleges and writing story pitches.

Sen added details about his second year at Berkeley, where he was paired with his thesis advisor, David Barstow, and filed a mountain of records requests across the country, covering at least 36 colleges and every four-year college in Texas.

"We ended up with more than 56,000 pages of documents by the end of the process," Sen exclaimed.

With the documents in hand, Sen built databases in spreadsheets and analyzed Social Sentinel's alerts, which arrived as PDFs. He then analyzed tweets to check for threatening content, looking for the most frequent terms after filtering out punctuation and common stop words.

"You can see the most common word used was 'shooting,' and you can see that would make sense," Sen said. "But a lot of times 'shooting' meant, like, shooting the basketball and things like that."
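
As a rough sketch of the word-frequency analysis Sen described (strip punctuation, drop common stop words, count what remains), here is a short example; the tweets and stop-word list are invented stand-ins.

```python
# Count the most frequent terms in a set of tweets after removing
# punctuation and common stop words, as described above.
import re
from collections import Counter

tweets = [
    "Shooting hoops at the rec center tonight!",
    "Reports of a shooting near campus, stay safe everyone",
    "He was shooting threes all game long",
]

STOP_WORDS = {"the", "a", "at", "of", "was", "all", "he"}

counts = Counter()
for tweet in tweets:
    words = re.findall(r"[a-z']+", tweet.lower())  # lowercase words only
    counts.update(w for w in words if w not in STOP_WORDS)

# "shooting" tops the list, but as Sen notes, frequency alone cannot
# tell a threat apart from basketball talk.
print(counts.most_common(5))
```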

With all this information in hand, Sen began speaking with experts, former Social Sentinel employees, colleges that used the service, and students and activists who were surveilled.

Through this reporting, Sen arrived at three findings. First, and most significant, the tool was not really being used to prevent suicides and shootings but to monitor protests and activists. Second, Social Sentinel was trying to expand beyond social media into services such as Gmail and Outlook. Lastly, there was little evidence the tool had saved lives, despite Social Sentinel's claims that it was doing great.

Sen concluded by noting the story's impact: other media outlets later published their own reporting on A.I. monitoring of student activity, and UNC eventually stopped using the service. Sen then took questions from the audience.

According to Joshua Westwick, director for the School of Communication and Journalism, the lecture was timely, especially considering the increased conversations about AI.

"Ari Sen's lecture was both engaging and informative. The examples that he shared illuminated the opportunities and challenges of AI," Westwick said. "I am so grateful we could host Ari through our partnership with the Pulitzer Center."

Westwick further explained that the lecture was exceptionally important for students and attendees as A.I. is present throughout many different aspects of our lives.

"As journalists and consumers, we need to understand the nuances of this technology," Westwick said. "Specifically, for our journalism students, understanding the technology and how to better report on this technology will be important in their future careers."

Greta Goede, editor-in-chief for the Collegian, described the lecture as one of the best lectures she has attended. She explained how the lecture was beneficial to her as Sen spoke about investigative journalism and how to look up key documents before writing a story.

"He (Sen) talked a lot about how to get data and how to organize it, which was really interesting to me since I will need to learn those skills as I go further into my career," Goede said. "I thought it was a great lecture and enjoyed attending."

More:
Speaker Lectures on artificial intelligence – The Collegian - SDSU Collegian

Artificial Intelligence as a Tool of Repression – Tech Policy Press

Audio of this conversation is available via your favorite podcast service.

Every year, the think tank Freedom House conducts an in-depth look at digital rights across the globe, employing over 80 analysts to report on and score the trajectory of countries on indicators including obstacles to access, limits on content, and violations of user rights. This year's iteration is headlined "Advances in Artificial Intelligence Are Amplifying a Crisis for Human Rights Online."

Once again this year, I spoke to two of the report's main authors, Allie Funk and Kian Vesteinsson.

What follows is a lightly edited transcript of the discussion.

Allie Funk:

My name is Allie Funk, I'm research director for technology and democracy at Freedom House and, luckily, a co-author of this year's Freedom on the Net report.

Kian Vesteinsson:

Great to be here with you, Justin. My name's Kian Vesteinsson, I'm a senior researcher for technology and democracy at Freedom House and also a co-author of this year's report.

Justin Hendrix:

This is the second year I've had you all on the podcast to talk about this report, which has been going now for 13 years, lucky 13, but unfortunately, it's not good news. We've got the 13th consecutive year of declines in internet freedom in your report.

Allie Funk:

I keep saying I've been doing this report for six years. One of these years, I can't wait to say that internet freedom improved for the first time and we can have a beautiful, positive essay, but this year is not the year, which I don't think will be surprising for a lot of your listeners. Global internet freedom, yeah, declined for the 13th consecutive year, and we actually found that attacks on free expression have grown more common around the world. So we have two record-high assaults on free expression: one around folks who faced arrest for simply expressing themselves, which happened in at least 55 different countries, and then also a record high in the number of governments, 41, that blocked political, social, and religious content.

We dive into that in the report. We also highlight the ways in which artificial intelligence is deepening this crisis for internet freedom, zooming in here on disinformation campaigns and censorship, and we close the report calling for the need to regulate AI, because we do think AI can be used for good when it is regulated and used appropriately. We also close with the idea that the democracy community needs to learn the lessons of the past decade of internet governance challenges, apply them to AI, and not lose momentum in protecting internet freedom more broadly.

Justin Hendrix:

That is the headline on this year's report, "The Repressive Power of Artificial Intelligence." Let's talk a little bit about that focus. Clearly, that's the hot topic of the moment; folks are concerned about generative AI and other forms of artificial intelligence that may be used as tools of repression, tools of censorship in particular. I happen to be at the Stanford Internet Observatory's Trust and Safety Research Conference while we're recording this, and there's a lot of discussion here, of course, about generative AI and about artificial intelligence more generally, and how it will be deployed across the internet in various content moderation efforts. There's one way of thinking about that that is positive, that perhaps we can help ameliorate problems like hate speech, harassment, and child sexual abuse, et cetera. On the other hand, you've got authoritarians who want to use these tools for very different purposes, indeed.

Allie Funk:

Absolutely. There is a lot of benefit in the use of automated systems within content moderation, and I'm sure that is one of the conversations at the conference. The type of material that human moderators have to review is so graphic and traumatizing, so there's a way here that automated systems can help ease that, and they're also really necessary to detect influence operations. But as one of the first things to say about AI, I want to make the point that AI has, for many years, been exacerbating digital oppression, especially as it relates to surveillance. That's not something we zoomed in on in this year's report, but I think it's really important to say upfront. As we looked at generative AI and the ways in which it's going to impact information integrity, we think about it this way: there are longstanding networks of pro-government commentators around the world, and in the 70 countries we cover, we found that at least 47 governments deployed folks to do this. That can mean hiring external, shady public relations firms to do that work, or it might mean working with social media influencers.

Kyrgyzstan hires high schoolers to do this, which is super interesting to me; I guess Gen Z is really great at knowing what's going on online. My millennial self, not always the case. These networks have existed for a really long time. Our view is that these networks will increase their reliance on gen AI tools, because those tools lower the barrier to creating disinformation, which the existing networks can then disseminate at scale. It's really about that creation part, and we found 16 different examples of countries where gen AI was already used to distort political and social-religious content. So we already see it happening, but we expect it's going to get a lot worse in the coming years.

Justin Hendrix:

Give me a few more examples of some of those 16 countries. What are you seeing with regard to the use of generative AI?

Kian Vesteinsson:

I think it's really important to set the stage here and acknowledge that we found 16 examples of this, but there are likely more. It's very hard to identify when content has been generated by one of these AI tools. The lack of watermarking practices within the market especially exacerbates this when it comes to images and other media content. Of course, telling the difference between text that's been written by a human being and text that's been generated by ChatGPT or another AI chatbot is becoming increasingly difficult. We expect that this is an undercount, and that's just a limitation of the research that's possible into this. Now, let's talk about examples. In Nigeria, there were really consequential elections in February 2023. The current leader was term-limited, so the elections were very highly contested. We found that an AI-manipulated audio clip spread on social media, allegedly showing the opposition presidential candidate talking about plans to rig the balloting, and this content really threatened to inflame a sensitive situation, to fuel partisan animosity and longstanding doubts about election integrity in a country that's seen really tragic election-related violence in the past.

Then there's Pakistan. This is an incredibly volatile political environment; the country has seen a yearlong-plus political crisis stemming from former prime minister Imran Khan's feud with the military establishment. That crisis has seen Imran Khan arrested multiple times, violent protests breaking out in response, and all sorts of really brutal repression targeting his supporters. Khan, in May of this year, posted a video clip on social media that featured AI-generated imagery of women standing up to the military in his favor. This sort of content really degrades the information environment in volatile times, and quite frankly, people need to be able to trust the images that they're seeing from political leaders in times of crisis like this.

One final example for you. Ahead of the Brazilian presidential runoff in October 2022, a deepfake video circulated online that falsely depicted a trusted news outlet reporting that then-President Bolsonaro was leading in the polls. Again, this is information that is masquerading as coming from a trusted source, one that people may be inclined to believe, and videos like these have the potential to contribute to a decaying information space and have a very real impact on human rights and democracy.

Justin Hendrix:

You point out that one of the things we're seeing, of course, is a commercial market for political manipulation and, in some cases, the use of generative AI tools to do that manipulation, and it appears, I suppose, based on prior reports, Israel is emerging as one of the homes of this marketplace.

Allie Funk:

It's not just with regard to disinformation, either. I think this is what's really interesting about digital oppression more broadly: the ways in which the private sector bolsters those efforts. Because of booming markets for these repressive tools, the barrier to entry for digital oppression generally has been lowered. This relates to surveillance and the spyware market, which I think has gotten a lot of really important attention, and that's where companies from Israel are really thriving. That, in part, has to do with the relationship between the Israeli state, the security forces, and the private sector, and the ways in which learning occurs across those sectors. But we're also increasingly seeing the disinformation market and the rise of these really shady public relations firms. We highlight in the report that Israel is home to a lot of these. There was this fantastic reporting from Forbidden Stories and a few others, just wonderful work of journalists going undercover to unravel what is called Team Jorge and the ways in which this company has been able to bombard the information space and sell its services to governments.

One of the really concerning examples that we highlight is the way Team Jorge was working with somebody in Mexico, though it's always hard to get attribution here, using fake accounts and manipulated information to orchestrate a false narrative around really serious allegations about a Mexican official and his involvement in human rights abuses, fabricated evidence, and criminal campaigns. A really tangible example there. Interestingly enough, a lot of these companies are also popping up in the UK and other democracies, which speaks to the opportunity for democracies to respond and try to clamp down on these sectors, especially in the spyware market, where entity lists and export controls can play a role.

Kian Vesteinsson:

I think one dynamic that's really important to talk about here: we talk a lot about dual-use technologies when we're talking about spyware and surveillance products, dual-use here referring to technologies that may have appropriate uses that are not rights-abusive and that can then be leveraged for military and law enforcement uses in really problematic ways. It's really important to acknowledge that a lot of the private sector disinformation and content manipulation trends that we're seeing are driven by companies that create completely innocuous products that may have completely legitimate uses. Team Jorge is an example on the far end of the spectrum. These are incredibly shady political fixers who are doing really dangerous stuff, but then there are companies like Synthesia, this UK-based generative AI company that creates AI-generated avatars that can read a script in a somewhat realistic way. Obviously, there are lots of completely legitimate uses for this. You could imagine it being deployed to spice up a company compliance training or something along those lines, but we've seen products generated by Synthesia leveraged for disinformation.

Prominently, in early 2023, Venezuelan state media outlets used social media to post videos produced by Synthesia that featured pro-government talking points about the state of the Venezuelan economy, which is in free fall. The economy is having incredibly devastating effects on people's ability to survive, so having a somewhat plausible-looking set of avatars masquerading as an international news outlet talking about how good the Venezuelan economy is could meaningfully shape people's perceptions of what's going on. These products that may be relatively innocuous can have really pernicious effects, and that's really one lesson that regulators need to be taking away from all of this.

Justin Hendrix:

Generative AI, AI generally, grabs the headline in this report, but of course, you've already mentioned that more conventional censorship methods are on the rise as well. Let's talk a little bit about that: the record high you've already mentioned, the 41 national governments that have blocked websites, and the various other tactics that you look at here, from blocks on VPNs to forced removals of content to entire bans on social media platforms.

Allie Funk:

AI does take the big framing here, but the research was extremely clear both in the ways in which AI is impacting censorship and in how AI is not replacing these conventional forms of censorship. I think there are a couple of different reasons why that's the case. One specific reason that is on my mind a lot is that AI just isn't always the most effective in certain conditions. For instance, when there are these massive outpourings of dissent around protests, really sophisticated AI-empowered censorship or filtering tools can't keep up. They can't. You see that in Iran and even China. We think governments will continually resort to very traditional tactics of censorship, which is why it's so important that we don't get distracted by AI or focus only on AI.

Kian Vesteinsson:

It's a lot easier to lock up protestors than to give instructions to social media platforms about how they need to shape their content moderation algorithms to respond to crises that are breaking out live. We saw this certainly in China. It's, of course, worth noting here that China confirmed its status as the world's worst environment for internet freedom, now for the ninth year running, though Myanmar did come close to surpassing China and is now the world's second-worst environment for internet freedom. In China, we saw the Chinese people show inspiring resilience over the past year. As I'm sure listeners of your show will recall, in November 2022, protests over a terrible fire in Xinjiang grew into one of the most open challenges to the CCP in decades. People across the country were mobilizing against China's harsh zero-COVID policy, and this included a lot of folks who took to the internet to express dissent in ways that many may not have been comfortable doing before the protests.

Many people took to VPNs to hop the Great Firewall and share information about what was going on in China on international platforms, like Twitter and Instagram, that are usually restricted in China. The big story here is that, in the immediate aftermath of the protests, information about people protesting spread online on Chinese social media platforms, and censors were not able to respond to the scale of the content that was being posted, in part because Chinese internet users were using clever terms and masked references to the protests to express their dissent. Now, unfortunately, the censors caught up. They scrubbed away criticism of the zero-COVID policy, imposed even harsher restraints on social media platforms in the country, and started cracking down even more heavily on access to VPNs. This mass movement of protesters did push the government to make a very rare withdrawal of its policy at a nationwide level. Conditions in China still remain very dire, and censorship and surveillance are imposed en masse and systematically, but the Chinese people have found these narrow ways to express their dissent and challenge the government's line.

Justin Hendrix:

Let's briefly touch on Iran as well, which is another country that has seen, of course, a year of mass protests. How has Iran fared in this year's measurement?

Kian Vesteinsson:

I was thinking about it this morning, Justin. I think a year ago, we were here talking about the protests in Iran and what it meant for protestors to have access to all sorts of censorship circumvention tools to raise their voices online at a really critical time, and that remains the case. But unfortunately, Iran saw the sharpest decline in internet freedom over the past year in our reporting. The death in custody of Jina Mahsa Amini sparked nationwide protests back in September of last year that continued over the ensuing months. In response, the regime restricted internet connectivity, blocked WhatsApp and Instagram, and arrested people en masse.

Now, again, protestors still took creative measures to coordinate safely using digital tools and to raise their voices online, and I think there's a very inspiring story here about the resilience of people in one of the most censored environments in the world, still finding ways to express their beliefs about human rights and to challenge the Iranian government. Unfortunately, I think it's also worth noting here that state repression in Iran was sprawling and not solely limited to the protests themselves. Two people were executed in Iran over the past year for allegedly expressing blasphemy over Telegram, and we also saw authorities pass a sprawling new law that actually expands their capacity for online censorship and surveillance, through some very sneaky procedural measures that meant it didn't get enough scrutiny from the limited ways that Iranian people can participate in legislative processes.

Justin Hendrix:

I want to touch on another country, India, which is one we've talked about in the past, in discussing this report and prior iterations of it. It seems like things have continued to degrade there as well.

Kian Vesteinsson:

That's unfortunately the case. Now, again, I think it's really important to say that India is one of the world's most vibrant democracies. People really participate in political processes both online and off in really engaged ways. But the government has taken increasing steps to expand its capacity to control online content over the past year. A couple of years ago, the government passed new rules, known as the IT Rules, which essentially empower authorities to order social media platforms to remove certain kinds of content under so-called emergency measures. Over the past year, the government used those emergency powers to restrict access to online content criticizing the government itself.

In one very prominent case earlier in 2023, authorities restricted YouTube and Twitter from displaying to viewers in India a documentary about Indian Prime Minister Narendra Modi's background in government. The documentary looked at Modi's tenure as chief minister of Gujarat, during which he presided over a period of intense unrest and communal violence, and quite frankly, it's important for people in India to be able to understand the political history of their prime minister and access information about his government's decisions. The Indian government's decision to restrict access to that information within India cut people off from some really important context. We also saw Indian authorities continue their practice of issuing incredibly over-broad and long-term internet shutdowns during times of crisis. Just recently, in the past month, authorities in the state of Manipur ended a 150-day-long internet shutdown that was first imposed in May.

Manipur saw some really tragic and egregious violence during that period, stemming from a conflict between two tribal communities, and in times of conflict, it's really important that people are able to access the internet to coordinate on their own safety and get access to health information when they need it. Shutdowns like this are incredibly restrictive and have really severe impacts on people in a time of crisis.

Allie Funk:

I might just chime in here on why India is so important in the global fight for internet freedom, beyond the fact that it's home to... is it the largest market of internet users now? Maybe the second or third; my stats in my head are a little... yeah, after China. It is such an important country when we're thinking about how to resist and respond to China's digital authoritarianism and the Chinese government's efforts to propagate their model of cyber sovereignty abroad. It serves as a really important regional lever here. We like to talk about India, and also Brazil and Nigeria, as these swing states of internet freedom, because they oscillate between these two worlds, like Kian mentioned, about the vibrancy of the online space. We've seen a lot of how the US government is trying to build better connections with Modi, the BJP, and the Indian government broadly as a way to counteract China and have larger influence.

This is not just with India. Biden went to Vietnam, and you're seeing this with Japan and South Korea as well, so India is part of this. But I sometimes worry about this increased effort to pull India in; doing that at a time in which human rights are on the chopping block within India is deeply concerning. How can we, as democracies, the US, the European Union, and other countries within BRICS like South Africa, balance that, right? How do we pull India in, but also take a carrot-and-stick approach where you're also trying to incentivize it to improve its behavior? Because I do worry that, in some sense, by pulling them in, we're giving a pass for some of this. Especially ahead of general elections next year in India, which are going to be such a pivotal moment for Indian democracy and its future, we really need to think creatively about how to incentivize the government to improve its own standards.

Kian Vesteinsson:

The same goes for companies that operate in India. The next year is going to be really pivotal for online speech in the country as people head into an election that could very well represent a challenge to the ruling party, and as we all know, elections are a flash point for this sort of online censorship. Social media companies that operate in India need to be prepared to face an escalating volume of content removal requests about sensitive content relating to political figures. It's very likely that people in power, prominent politicians from all parties, could seek to exercise control over platforms to remove critical speech. It's essential that the companies be prepared to push back against those requests, as much as is feasible within India's legal framework, to ensure that people have access to information about the elections during the electoral period.

Justin Hendrix:

Not just India, but many other countries having elections, including notably the one that we are all sitting in, the United States. How does the United States fare on this ranking?

Allie Funk:

The United States, we rank it eighth. It's tied with Australia, France, and Georgia. Internet freedom is trekking along in the United States. We are one of the leaders in the whole field of internet freedom; that's where some of the terminology actually comes from, the United States and Hillary Clinton when she was Secretary of State and really bolstered this approach. Continuing on that, I think there are two interesting tensions when we think about the state of human rights online in the US. Internationally, starting last year and continuing this year, the Biden administration has put internet freedom at the forefront of its foreign policy, and it's created really real results, everything from creating a new cyber bureau in the State Department that has its own digital freedom component, to a new cyber and tech ambassador, Nathaniel Fick. The Biden administration, and Congress as well, have put millions of dollars into programming around digital democracy.

We've rolled out a really welcome executive order that limits the federal government's use of spyware under certain circumstances, so really impressive movement here from the administration. And how could I forget that we've been chair of the Freedom Online Coalition for the past year? The sticky point is that we're not seeing that movement domestically, particularly as it relates to action from Congress. A little unsurprisingly, Congress has struggled with passing meaningful legislation on a whole host of issues, so it's not surprising they're lagging here. We still don't have a privacy law, we still don't have movement on platform responsibility and transparency, and it's not from a lack of bills. There was a really fantastic privacy bill that we endorsed, proposed last Congress. It didn't make it through.

It was bipartisan, and it had a lot of support from civil society, even some industry. It's just this really alarming juxtaposition, and I think the way we think about it is: in order for the United States to be a successful and effective advocate of internet freedom abroad, we've got to get our own house in order. We've got challenges with disinformation proliferating online, and you've got actors within particular parties that are driving that themselves. Over this past year, I think one of the big things that changed is that you have so much internet regulation at the state level, some of which is good, like a lot of good data privacy laws, but a lot of it is bad for free expression and access to information.

I think the biggest example of this is in Montana. The state passed a bill that forces Apple and Google to remove TikTok from their app stores. It'll go into effect in January, but it's been challenged on constitutional grounds, so I'm skeptical. I'm not a lawyer, but I'm skeptical it'll go through, so we'll have to see. It just speaks to the ways in which states are stepping up to take these issues head on, but in a way that isn't always beneficial for internet freedom.

Justin Hendrix:

I'm speaking to you on the morning we've learned that the Supreme Court will, in fact, take up the question of whether the Florida and Texas must-carry laws are constitutional. A lot is working through the courts, including the big Missouri v. Biden case, which will potentially also end up before the court, so quite a lot could change here in the next year.

Kian Vesteinsson:

Justin, I was really holding out hope that we would break the news to you on your podcast about NetChoice getting cert, but that'll have to be another time. Listen, I think these are some really big questions in the cases that you've alluded to, and certainly, I don't feel comfortable saying that we have the answers to what the court should do in either of these really big cases, but I'll say a couple of things here. Platforms have too much power, full stop, over what people can or cannot express online, and certainly, that power is concentrated, right now, in relatively few companies. But the must-carry requirements that we saw coming out of Florida and Texas, which are at issue in NetChoice, are misguided. They're tying the hands of social media platforms to enforce their own terms of service and to do the critical work of reducing the spread of false, misleading, and hateful content online.

What's really critical is that governments at the state and federal level focus on legislative frameworks that push for transparency and ensure that researchers have access to data about how information spreads on the platforms and the mitigation measures that are in place, and it's really critical that states and the federal government take action to limit what data can be collected and used in these sorts of content recommendation systems. Certainly, I think the concerns that are raised about the power of these platforms over people's speech are very valid and really important to contest, and the sorts of discussions that are coming out of these cases are really important, but the Florida and Texas must-carry laws are simply the wrong approach.

Justin Hendrix:

So when I do a search for the term "good news" in this report, I get no results found, but there must be some good news in the world, so please, leave my listeners with some hope that things might improve.

Allie Funk:

I think there's a lot of good stuff, actually, that's coming out, and it often takes time for efforts to show up in the research. Let me say, top line: over the past two years, and this relates to what we were talking about with the US and the Biden administration's efforts to incorporate internet freedom into its foreign policy, there has been a lot of really wonderful movement strengthening democratic coordination on these issues at the multilateral level, and those efforts take time to yield results. Very specifically, one example we highlight is around the global market for spyware; there's been so much movement trying to reel back that market. The Biden administration's executive order that limits the federal government's use of it is huge, because it sets standards for democracies around the world about what they should adopt at home.

The Biden administration has also put NSO Group and a few others on the entity list, which is actually having a real impact on the market. We're going to start seeing how entity lists and export controls play out, and Costa Rica's government has called for a full moratorium on these tools. I think there's already one really interesting example, in which the Indian government was reportedly looking for less powerful spyware tools than NSO's because of the reputational issues of using NSO. That's one good movement; there's just been really great coordination between industry, government, and civil society on that. I think another area I'd highlight is the way digital activism and civil society advocacy is driving progress, and a lot of times, they're pushing courts to behave in a way that protects human rights. We have plenty of examples this year of judiciaries, especially in countries that are partly free or not free, where you would think the judiciary might be captured by a political party or by those in power, and it's not; it's still holding up and pushing back against what the government's doing.

A really good example, I think, is in Uganda, which, interestingly enough, has declined the third most of any country in our report over the past decade. It's just growing more repressive almost by the day. The constitutional court there repealed a section of the Computer Misuse Act that was used to imprison people. I think this is such a good example, because at the same time that the court did that, the government passed a really alarming law that imposes 20-year prison terms for people who share information about same-sex sexual content, which we expect to be wielded against the LGBTQ community in the country. It really leaves hope that within the judiciary, there's space there. With all the doom and gloom, I hope that readers also see the positive story; we wanted to end the report on, hey, we're not screwed over here.

There are a lot of lessons that we have all collectively learned about what works to protect internet freedom and, candidly, what doesn't, and we call for governments, industry, and civil society to internalize those lessons and make sure we apply them moving forward, especially as AI is exacerbating internet freedom's decline.

Justin Hendrix:

Allie, I want to indulge that optimism, and yet, on the other hand, there are 13 years of things moving in the wrong direction and a lot of challenging, choppy waters ahead, as big problems like climate change and even more profound inequality come along. One could read your reports and walk away with the sense that there's a valence to technology that is leading us toward a more repressive and authoritarian world, and I find myself concerned that's the case. Clearly, that's what this podcast is about on some level. How do you think about that?

Kian Vesteinsson:

Justin, I like the idea that this podcast is basically just talk therapy to work through this fundamental fear, which I share. I think it can be really hard to be optimistic about the future of the internet, but let me get personal here. I grew up online, and I learned about who I was as a queer person because I had access to this enormous scope of discussion of people sharing their fundamental experiences of who they are and how they live life. That was really instrumental for me. I think that promise of access to information, of access to a greater understanding of who you are as a person, still holds. There are still these spaces where people can turn to the internet to express themselves in ways that they can't necessarily do in their offline lives. Part of the goal of our project is to call attention to those contexts where censorship and surveillance are so extreme as to limit those fundamental acts of expression and security in the self, but where I really find optimism in all of this is that this promise is something that people are still mobilizing around.

There's a really powerful example from the past year in Taiwan. Taiwan is one of the freest environments in the world when it comes to human rights online, certainly in the Asia-Pacific region, but technology experts recently started noticing a growing number of domain-level blocks being reported in the country. These sometimes crossed over into the internet that most of us spend our time on, like Google Maps or PTT, a very popular bulletin board in Taiwan, so they started looking into it. These technologists started diving into what was going on, and what they found indicated that the Taiwanese authorities had implemented domain-level blocking in an astounding number of cases that were not publicly reported, and they went to their internet regulators and said, "What's going on here? This is really concerning."

This prompted regulators to disclose a whole bunch of information about how law enforcement agencies operate this domain-level blocking system, this DNS blocking system, in cybercrime cases, sharing more about what was going on with regard to the system and, in doing so, creating a greater avenue to push back on this sort of censorship. Now, there are obviously legitimate reasons that governments may want to curtail the activities of cyber fraud websites and other forms of outright criminal content online, but it's important that they do so in the daylight, so that people understand what is going on and what systems are in place to ensure accountability and oversight. These technologists mobilized around the idea that they should know what's happening to the Taiwanese internet at the government's behest. It's little stories like this that tell me that people around the world still see the internet as a place of promise and something that they're willing to fight for.

Allie Funk:

I might also just add: how long did it take us? Us, I was in, what, middle school and high school, so I'm using "us" generously here, but how long did it take us as a community to talk about internet regulation? It took years, right? We thought, "Oh, my God, look how great these services are," and then social media came, "It's going to be awesome," and then we were like, "Oh, heck," I don't know if I can cuss on your podcast, but, "Oh, heck. We need to do something, these platforms are running amok," et cetera, et cetera, "They're leading to online violence, they're contributing to genocide," and so on.

We have had generative AI on the scene since November, and I know that AI regulation discussions have been going on for a long time before that, but the speed with which so many people are saying, "We need to regulate this, we need to figure out how to make it safe and fair, how to make sure it's protecting human rights, not driving discrimination, and protecting the most marginalized communities," that is such a difference in how we approach technology. It speaks to the fact that we've learned a lot and we're actually making progress. Even if it's not yet showing up in Freedom on the Net, I'm very hopeful that one day it will.

Justin Hendrix:

I suppose in the face of all of these declining statistics, I always say, "Look for the helpers." You are two of them. I know, also, that this report is the product of many hands, including people all around the world and individuals who are, in fact, in some of the contexts this report covers. Can you give my listeners a brief sense of the methodology and how you put this thing together, so they have an idea of how it works?

Kian Vesteinsson:

I love to talk about our process and particularly this network of contributors. This year we worked with over 85 people to produce this report: human rights activists, journalists, lawyers, folks who have lived experience fighting for digital rights in the countries that we cover. For every country that we cover in this report, we're working with a person who's based in that country, where it's safe. As you can imagine, there are a number of countries where it is not possible for Freedom House to safely partner with someone, so we work with a member of the diaspora community, someone who's working in exile, for example, to put out that country report.

Through a pretty extensive process that starts in January and February and wraps up around this time of year, we take a deep dive into conditions on the ground in each of the countries that we cover, and our focus here is on understanding people's experience of access to the internet, restrictions on online content, and violations of their rights, like privacy. One really critical component here is making sure that we bring our researchers together to talk about the different developments that are happening in their countries, and we really look for opportunities to put the comparative part of our project up front, giving folks a chance to understand how developments in Thailand may relate to the fight for free expression in Indonesia. These comparisons can really strengthen the fight for internet freedom around the world, and it's honestly a privilege for us to be able to facilitate folks sharing information through Freedom on the Net.

Justin Hendrix:

I want to thank the two of you for speaking to me again this year about this report, and I will look forward to, hopefully, year number 14 being the year that things finally turn around. I think I said that last time we spoke, so I'm not going to put too many eggs in that basket, but either way, Kian, Allie, thank you so much for speaking to me today.

Allie Funk:

Thanks so much, and thanks for all your great work at Tech Policy Press. It's a fantastic resource putting these issues on the map.

Kian Vesteinsson:

It's always a pleasure, Justin.

Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.

See the rest here:
Artificial Intelligence as a Tool of Repression - Tech Policy Press

Artificial Intelligence and Cybersecurity: Key Topics at the 78th … – Mayer Brown

Recently, world leaders and key stakeholders gathered for the 78th session of the United Nations General Assembly (UNGA) to discuss global challenges with the goal of furthering peace, security, and sustainable development. A key topic of discussion was the digital revolution, focusing on the opportunities and challenges presented by artificial intelligence (AI), as well as the continued importance of strengthening global cybersecurity.

Throughout the UNGA, world leaders highlighted potential risks associated with AI. In US President Joe Biden's remarks to the UNGA, he stated that AI holds "enormous potential and enormous peril" and noted that "[t]ogether with leaders around the world, the United States is working to strengthen rules and policies so AI technologies are safe before they're released to the public." UN Secretary-General António Guterres also referred to AI as an "emerging threat[]" that requires new, innovative forms of governance, with input from experts building this technology and from those monitoring its abuses. When speaking to the Security Council in July 2023, Secretary-General Guterres had provided a similar warning, stating that AI tools can also be used by those with malicious intent, such as for targeting critical infrastructure, disinformation and hate speech, and deepfakes, and that malfunctioning AI systems pose particular risks, for example, in the context of nuclear weapons and biotechnology.

World leaders called for new guardrails or governance frameworks to address these risks. Secretary-General Guterres announced the establishment of a High-Level Advisory Body on Artificial Intelligence, which will be comprised of government and private sector experts. This builds on Secretary-General Guterres' prior backing of an international AI watchdog body similar to the International Atomic Energy Agency. The High-Level Advisory Body on Artificial Intelligence will be tasked with analyzing and developing recommendations for the international governance of AI. An interim report on AI governance is scheduled to be released at the end of 2023, with the recommendations finalized by mid-2024.

In addition to emphasizing the importance of AI governance, political leaders discussed the role of AI in accelerating the achievement of the UN's Sustainable Development Goals, which focus on addressing poverty, inequality, climate change, environmental degradation, peace, and justice. US Secretary of State Antony Blinken, along with foreign ministers and secretaries of state from Japan, Kenya, Singapore, Spain, Morocco, and the United Kingdom, met with several private sector AI developers to discuss AI's potential in supplementing the UN's Sustainable Development Goals. The discussion noted the importance of partnership between the government, private sector, and stakeholders in responsibly harnessing AI to achieve these goals.

Beyond AI, concerns about global cybersecurity continued to be a theme at this year's UNGA. For example, the US State Department led a side dialogue focused on securing cyberspace from significantly destructive attacks. Ambassador at Large for Cyber and Digital Policy Nathaniel C. Fick, who moderated the discussion, and Deputy Secretary of State Richard R. Verma focused on how member states could cooperate with each other to respond to and recover from cyber attacks, as well as emphasizing the United States' commitment to collaborating with other countries to strengthen cybersecurity. As this effort to enhance global collaboration on cybersecurity continues, nation states will need to determine the ways in which private sector entities (critical infrastructure and cybersecurity firms, for example) will play a role in this process.

The 78th session highlighted AI and cybersecurity as prominent global challenges, as well as important opportunities for cross-border collaboration between member states and the private sector. While these initiatives run in parallel with actions by governments in Europe, the United States, and other regions, they also may affect how individual countries approach AI and cybersecurity. Moreover, although similar discussions on AI are occurring in other international fora, such as the Organization for Economic Cooperation and Development and the G-7 (through the AI Hiroshima Process), the broad reach of the UN-led effort could allow it to have significant influence over global AI standards. Companies interested in global AI and cybersecurity policy will likely benefit from considering how the positions shared at the UNGA could inform key policy debates affecting their business in the months ahead.

See the original post:
Artificial Intelligence and Cybersecurity: Key Topics at the 78th ... - Mayer Brown