Archive for the ‘Artificial Intelligence’ Category

REVERSE ROLES: Harry Hurley Interviews MH on Harrison Podcast About Artificial Intelligence – TALKERS magazine

WPG, Atlantic City radio star Harry Hurley reverses roles with MH on this week's installment of the award-winning PodcastOne series, The Michael Harrison Interview. Actually, this week's episode of the long-running podcast consists of provocative excerpts from Harrison's recent guest appearance (6/4) on Hurley's popular WPG morning show, in which he was booked to discuss the technological and sociological implications of AI. This took place in conjunction with the release of the new Gunhill Road music video, "Artificial Intelligence (No Robots Were Injured in the Production of this Song)." Harrison co-wrote and performs lead vocals on the song with the venerable band; it had its world premiere on WPG that morning and kicked off Harrison's "Obsolete Slobs" radio tour in support of the piece. The conversation is a no-holds-barred look at the implications, beneficial and destructive, of the remarkable new technology that is disrupting art, communications, and life here in the early decades of the 21st century and promises to have a dramatic impact on the course of humanity going forward. Don't miss this! Listen to the podcast in its entirety here.

View original post here:
REVERSE ROLES: Harry Hurley Interviews MH on Harrison Podcast About Artificial Intelligence - TALKERS magazine

Should I Be Scared of Artificial Intelligence? – The Banner

Should I be scared of artificial intelligence?

My default position on any new technology is doubt and skepticism. Blame my Calvinist underpinnings for that, but has the latest-greatest ever really lived up to the hype? Even the experts aren't sure (or aren't sharing) exactly how AI works. Could this possibly take off?

It's been more than a year since artificial intelligence, especially generative artificial intelligence, took over technology news. Generative AI allows for untold amounts of information to be ingested by powerful computers that then can generate what appears to be original text or images based on requests (called prompts) from users.

If you've read anything about generative AI, you know about the massive investments being made and the innovations, efficiencies, and new worlds AI will open for us. But there are drawbacks too: the disruption we'll all be facing in our workplaces and, of course, the fakery AI is capable of. Maybe you've seen (or created yourself) samples of this technology in action.

For me, a turning point for my skepticism was a test offered by The New York Times to see if people could determine whether pictures were real or AI-generated. I'm a visual guy and thought this would be easy. I failed miserably.

So, should we be scared? When it's not clear what is real and what is not, we're left to wonder, or worse, give up and just believe what we see. Yes, that is scary.

In a 2023 Atlantic article, philosopher Daniel C. Dennett calls people posing as someone other than their real selves "counterfeit people." He makes a compelling argument that creating [or passing along] counterfeit digital people "risks destroying our civilization." His solution? Treat counterfeit people like we do counterfeit currency.

Although he admitted it might be too late already, he argued for complete transparency about what has been created by AI and for making sure we have technology (smartphones, scanners, digital TVs, and so on) that can detect counterfeits. And then, just as importantly, we should make counterfeit content creators, including tech company executives and technicians, legally liable for the lies they are telling with AI text and images.

Here is the original post:
Should I Be Scared of Artificial Intelligence? - The Banner

Apple will bring AI to devices and Siri in much anticipated OpenAI partnership – NPR

Apple software chief Craig Federighi, right, pictured with exec John Giannandrea, announced a partnership with OpenAI to bring AI features to its products. (AP Photo/Jeff Chiu)

Apple is going all-in with artificial intelligence, announcing several new AI features and a partnership with ChatGPT-maker OpenAI. The company announced the deal at its Worldwide Developers Conference on Monday afternoon.

The highly anticipated AI partnership is the first of its kind for Apple, which has been regarded by analysts as slower to adopt artificial intelligence than other technology companies such as Microsoft and Google.

The deal allows Apples millions of users to access technology from OpenAI, one of the highest-profile artificial intelligence companies of recent years. OpenAI has already established partnerships with a variety of technology and publishing companies, including a multibillion-dollar deal with Microsoft.

OpenAI will be integrated into Apples digital assistant Siri, Apple software chief Craig Federighi said during the conference. That would allow people to ask for help with things like recipe ideas, room decorations or composing a story, Federighi said.

"Suppose you want to create a custom bedtime story for your six-year-old who loves butterflies and solving riddles," Federighi said. "Put in your initial idea, and send to ChatGPT."

The announcement comes as AI has experienced explosive growth, and some embarrassing setbacks. Chatbots and AI assistants have been beset with issues including hallucinations, plagiarism and incorrect or biased results. OpenAI itself has been embroiled in allegations of copying actor Scarlett Johansson's voice without her permission.

Apple is also at the center of an antitrust lawsuit filed by the Justice Department and 15 states. The government accuses Apple of abusing its power as a monopoly to push out rivals and keep customers using its products. It's unclear how Apple's new partnership with OpenAI could play into this case.

Shortly after Apple's announcement, OpenAI CEO Sam Altman posted on X, formerly known as Twitter: "very happy to be partnering with apple to integrate chatgpt into their devices later this year! think you will really like it."

Apple is also rolling out what it calls Apple Intelligence, its term for Apple's own new generative AI software.

Apple Intelligence will enable transcription for phone calls, AI photo retouching and improvements in the natural conversation flow with Siri, the company said. The software can also be used to summarize notifications and text messages, as well as articles, documents and open web pages.

Federighi placed an emphasis on privacy, with a new system called Private Cloud Compute that he said will ensure data security for users.

Apple says the new features will be released later this year.

Link:
Apple will bring AI to devices and Siri in much anticipated OpenAI partnership - NPR

Governor, lawmakers are already planning big revisions to Colorado’s first-in-the-nation artificial intelligence law – The Colorado Sun

Four weeks after a contentious Colorado bill regulating artificial intelligence systems to prevent harm to consumers was signed into law, the governor, attorney general and lawmakers are already vowing to revise the statute at the request of business leaders.

Discussions about changing the law began earlier this month after state officials heard an outcry from about 200 prominent technology company executives and venture capitalists about Senate Bill 205.

The plan to take another look at the law isn't entirely surprising. Polis had reservations about the bill, but signed it anyway because he said there was time to change it before it went into effect in 2026.

"I'm certainly encouraged by the fact that the beginning date for provisions are in 2026," Gov. Jared Polis told reporters after the legislative session ended May 8. "I am confident that will leave ample time for any improvements that need to be made prior to it becoming effective."

Changes to the law can't be made by the legislature until the General Assembly reconvenes in January for the 2025 lawmaking term, unless the governor or legislature calls a special session, which is highly unlikely.

In a letter Thursday to "innovators, consumers, and all those interested in the AI space," Polis, Attorney General Phil Weiser and Senate Majority Leader Robert Rodriguez, D-Denver, acknowledged that the recently passed legislation needed additional clarity and improvements. Rodriguez was one of the main sponsors of the bill.

"Starting today, in the lead up to the 2025 legislative session and well before the February 2026 deadline for implementation of the law, at the governor and legislative leadership's direction, state and legislative leaders will engage in a process to revise the new law, and minimize unintended consequences associated with its implementation," the letter says.

Denver Mayor Mike Johnston added his signature after this story first published.

The letter goes on to spell out what parts of the law must be addressed, including defining what "high-risk" systems are and focusing regulation on developers of those high-risk systems and not the smaller companies that use third-party AI software. (If a company were using something like ChatGPT and its developer OpenAI made changes, there is confusion about whether Colorado law would require the local company to reassess its compliance.)

Other improvements the governor, attorney general and lawmakers are promising to make include requiring enforcement by the attorney general to happen after the fact instead of proactive disclosure, and clarifying that consumers have a right to appeal only to the attorney general, though they would also bring up any discrimination matters to the Colorado Civil Rights Commission.

Tech leaders complained about the prohibitive language of the new law and how it was already putting a black eye on Colorado for companies looking to expand in the state.

The letter from Polis, the attorney general and Rodriguez addressed that reality, saying that since the law was signed, "many of our home-grown businesses have highlighted the risk that an overly broad definition of AI, coupled with proactive disclosure requirements, could inadvertently impose prohibitively high costs on them, resulting in barriers to growth and product development, job losses and a diminished capacity to raise capital."

Dan Caruso, head of Caruso Ventures and founding CEO of telecom Zayo Group, was one of 200 names on the letter to Polis from tech industry leaders.

Caruso said he learned about the AI bill after it became law and only because he immediately heard from investors and tech companies confused about the ramifications to their businesses.

The way the law is written, he said, a grocery store that uses AI at the cash register to scan and add up merchandise could be subject to new reporting requirements even if they had nothing to do with the AI inside.

But his other problem is that tech startups dabbling in AI may feel there's an added administrative burden to developing technology in Colorado.

"We certainly agree with the intent of trying to protect the consumer, but in the process you cut off a bunch of investment into Colorado and you're going to be hurting all the consumers in Colorado because we need tech jobs. We need our innovation economy. That's what makes us thrive," Caruso said. "By rushing ahead on the AI bill without fully understanding the implications, we kind of put a lot of the innovation economy into jeopardy. So we needed to work with them to correct the broadness of certain provisions of the bill."

Caruso said he and other tech leaders hope to participate in the process to revise the bill to prepare an amended version for the next legislative session.

"That letter is the first step of the process. Not the last step. We still have to get to the step where changes are made early next year," Caruso said. "But we need to reassure investors that Colorado still is a great place to invest for innovation."

Other notable names on the industry letter included Bryan Leach, CEO of the consumer app developer Ibotta; former DaVita CEO Kent Thiry; Brad Feld, a venture capitalist at The Foundry Group; and David Cohen, who cofounded and is CEO of Techstars, which he started with Feld and Polis.

Rodriguez, who didn't respond Friday to a request for comment, said during a legislative hearing on the measure that "all that we're asking for companies to do (is put) in place a notice to consumers, (perform) risk assessments on their tools and have an accountability report when something goes wrong that results in discrimination. That's what this bill does."

But AI developers opposed the bill from the start because there were concerns that even small changes at the development stage would discourage innovation by startups and AI-adjacent companies. Consumer advocates, however, felt the bill did not go far enough because AI-based discrimination was already occurring, with cases involving background checks and resume screening, and adjustments to auto insurance premiums.

The new letter from Polis and other elected officials was disappointing, Matt Scherer, senior policy counsel for the Center for Democracy and Technology, a nonprofit that advances civil rights and liberties, said in an email on Friday. He said these changes were proposed before a task force that includes labor and consumer group representation has met.

"Labor and consumer groups will strongly oppose those changes," Scherer said. "The changes they are proposing would completely neuter the law, which is, of course, the objective of tech industry and other business pressure groups who have been spreading misinformation and fear-mongering about this bill ever since the sponsors made a few modest changes to strengthen what was a largely industry-crafted bill."

Eric Maruyama, a spokesman for the governor's office, said in an email that Gov. Polis is "proud that Colorado is leading the way in the innovative sectors of tomorrow."

"The governor is grateful for and shares Sen. Rodriguez's commitment to ensuring that Coloradans are protected from bias and discrimination in AI and is focused on ensuring that state standards support consumers and Colorado's innovation economy," Maruyama said. "Gov. Polis looks forward to working with leaders and stakeholders to help grow Colorado's AI sector."

Colorado Sun staff writer Jesse Paul contributed to this report.

This story has been updated to add additional comments.


Read the original post:
Governor, lawmakers are already planning big revisions to Colorado's first-in-the-nation artificial intelligence law - The Colorado Sun

The effects of artificial intelligence on the future of humanity (by Pope Francis) – ZENIT

(ZENIT News / Vatican City-Apulia, Italy, 06.14.2024).- For the first time in history, a Pope participated in a G7 summit, a meeting attended by the leaders of the seven most industrialized economies in the world, along with some guests invited by the current presiding president. At the invitation of President Giorgia Meloni, Pope Francis attended the meeting. Below, we offer the full text of the Pope's speech. Pope Francis read a shorter version of this same speech earlier in the afternoon on Friday, June 14.

***

Esteemed ladies and gentlemen,

I address you today, the leaders of the Intergovernmental Forum of the G7, concerning the effects of artificial intelligence on the future of humanity.

Sacred Scripture attests that God bestowed his Spirit upon human beings so that they might have "skill and understanding and knowledge in every craft" (Ex 35:31).[1] Science and technology are therefore brilliant products of the creative potential of human beings.[2]

Indeed, artificial intelligence arises precisely from the use of this God-given creative potential.

As we know, artificial intelligence is an extremely powerful tool, employed in many kinds of human activity: from medicine to the world of work; from culture to the field of communications; from education to politics. It is now safe to assume that its use will increasingly influence the way we live, our social relationships and even the way we conceive of our identity as human beings.[3]

The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other it gives rise to fear for the consequences it foreshadows. In this regard, we could say that all of us, albeit to varying degrees, experience two emotions: we are enthusiastic when we imagine the advances that can result from artificial intelligence but, at the same time, we are fearful when we acknowledge the dangers inherent in its use.[4]

After all, we cannot doubt that the advent of artificial intelligence represents a true cognitive-industrial revolution, which will contribute to the creation of a new social system characterised by complex epochal transformations. For example, artificial intelligence could enable a democratization of access to knowledge, the exponential advancement of scientific research and the possibility of giving demanding and arduous work to machines. Yet at the same time, it could bring with it a greater injustice between advanced and developing nations or between dominant and oppressed social classes, raising the dangerous possibility that a "throwaway culture" be preferred to a "culture of encounter."

The significance of these complex transformations is clearly linked to the rapid technological development of artificial intelligence itself.

It is precisely this powerful technological progress that makes artificial intelligence at the same time an exciting and fearsome tool, and demands a reflection that is up to the challenge it presents.

In this regard, perhaps we could start from the observation that artificial intelligence is above all else a tool. And it goes without saying that the benefits or harm it will bring will depend on its use.

This is surely the case, for it has been this way with every tool fashioned by human beings since the dawn of time.

Our ability to fashion tools, in a quantity and complexity that is unparalleled among living things, speaks of a techno-human condition: human beings have always maintained a relationship with the environment mediated by the tools they gradually produced. It is not possible to separate the history of men and women and of civilization from the history of these tools. Some have wanted to read into this a kind of shortcoming, a deficit, within human beings, as if, because of this deficiency, they were forced to create technology.[5] A careful and objective view actually shows us the opposite. We experience a state of outwardness with respect to our biological being: we are beings inclined toward what lies outside-of-us, indeed we are radically open to the beyond. Our openness to others and to God originates from this reality, as does the creative potential of our intelligence with regard to culture and beauty. Ultimately, our technical capacity also stems from this fact. Technology, then, is a sign of our orientation towards the future.

The use of our tools, however, is not always directed solely to the good. Even if human beings feel within themselves a call to the beyond, and to knowledge as an instrument of good for the service of our brothers and sisters and our common home (cf. Gaudium et Spes, 16), this does not always happen. Due to its radical freedom, humanity has not infrequently corrupted the purposes of its being, turning into an enemy of itself and of the planet.[6] The same fate may befall technological tools. Only if their true purpose of serving humanity is ensured, will such tools reveal not only the unique grandeur and dignity of men and women, but also the command they have received to "till and keep" (cf. Gen 2:15) the planet and all its inhabitants. To speak of technology is to speak of what it means to be human and thus of our singular status as beings who possess both freedom and responsibility. This means speaking about ethics.

In fact, when our ancestors sharpened flint stones to make knives, they used them both to cut hides for clothing and to kill each other. The same could be said of other more advanced technologies, such as the energy produced by the fusion of atoms, as occurs within the Sun, which could be used to produce clean, renewable energy or to reduce our planet to a pile of ashes.

Artificial intelligence, however, is a still more complex tool. I would almost say that we are dealing with a tool sui generis. While the use of a simple tool (like a knife) is under the control of the person who uses it and its use for the good depends only on that person, artificial intelligence, on the other hand, can autonomously adapt to the task assigned to it and, if designed this way, can make choices independent of the person in order to achieve the intended goal.[7]

It should always be remembered that a machine can, in some ways and by these new methods, produce algorithmic choices. The machine makes a technical choice among several possibilities based either on well-defined criteria or on statistical inferences.

Human beings, however, not only choose, but in their hearts are capable of deciding. A decision is what we might call a more strategic element of a choice and demands a practical evaluation. At times, frequently amid the difficult task of governing, we are called upon to make decisions that have consequences for many people. In this regard, human reflection has always spoken of wisdom, the phronesis of Greek philosophy and, at least in part, the wisdom of Sacred Scripture. Faced with the marvels of machines, which seem to know how to choose independently, we should be very clear that decision-making, even when we are confronted with its sometimes dramatic and urgent aspects, must always be left to the human person. We would condemn humanity to a future without hope if we took away people's ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines. We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: human dignity itself depends on it.

Precisely in this regard, allow me to insist: in light of the tragedy that is armed conflict, it is urgent to reconsider the development and use of devices like the so-called "lethal autonomous weapons" and ultimately ban their use. This starts from an effective and concrete commitment to introduce ever greater and proper human control. No machine should ever choose to take the life of a human being.

It must be added, moreover, that the good use, at least of advanced forms of artificial intelligence, will not be fully under the control of either the users or the programmers who defined their original purposes at the time they were designed. This is all the more true because it is highly likely that, in the not-too-distant future, artificial intelligence programs will be able to communicate directly with each other to improve their performance. And if, in the past, men and women who fashioned simple tools saw their lives shaped by them (the knife enabled them to survive the cold but also to develop the art of warfare), now that human beings have fashioned complex tools they will see their lives shaped by them all the more.[8]

The basic mechanism of artificial intelligence

I would like now briefly to address the complexity of artificial intelligence. Essentially, artificial intelligence is a tool designed for problem solving. It works by means of a logical chaining of algebraic operations, carried out on categories of data. These are then compared in order to discover correlations, thereby improving their statistical value. This takes place thanks to a process of self-learning, based on the search for further data and the self-modification of its calculation processes.

Artificial intelligence is designed in this way in order to solve specific problems. Yet, for those who use it, there is often an irresistible temptation to draw general, or even anthropological, deductions from the specific solutions it offers.

An important example of this is the use of programs designed to help judges in deciding whether to grant home-confinement to inmates serving a prison sentence. In this case, artificial intelligence is asked to predict the likelihood of a prisoner committing the same crime(s) again. It does so based on predetermined categories (type of offence, behaviour in prison, psychological assessment, and others), thus allowing artificial intelligence to have access to categories of data relating to the prisoner's private life (ethnic origin, educational attainment, credit rating, and others). The use of such a methodology, which sometimes risks de facto delegating to a machine the last word concerning a person's future, may implicitly incorporate prejudices inherent in the categories of data used by artificial intelligence.

Being classified as part of a certain ethnic group, or simply having committed a minor offence years earlier (for example, not having paid a parking fine) will actually influence the decision as to whether or not to grant home-confinement. In reality, however, human beings are always developing, and are capable of surprising us by their actions. This is something that a machine cannot take into account.
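The mechanism described above, a score built from predetermined categories, can be made concrete with a deliberately naive Python sketch. This is purely illustrative and not any real system: the feature names, weights and threshold are all invented, but they show how a proxy variable tied to a prisoner's background can tip the machine's "last word" regardless of individual conduct.

```python
# Hypothetical illustration only: a naive category-based risk score.
# Weights fitted to historical data can silently encode prejudice
# attached to the categories themselves.
WEIGHTS = {
    "prior_offences": 0.5,
    "prison_infractions": 0.3,
    "neighborhood_arrest_rate": 0.4,  # a proxy that can encode ethnic or class bias
}

def risk_score(record: dict) -> float:
    """Weighted sum over the record's categories; higher means 'riskier'."""
    return sum(WEIGHTS[k] * record.get(k, 0.0) for k in WEIGHTS)

def recommend_home_confinement(record: dict, threshold: float = 1.0) -> bool:
    # The danger the text names: the machine's number becomes the final word.
    return risk_score(record) < threshold

# One prior offence, spotless prison record, but a high-arrest-rate neighborhood:
inmate = {"prior_offences": 1, "prison_infractions": 0,
          "neighborhood_arrest_rate": 2.0}
print(recommend_home_confinement(inmate))  # prints False: the proxy alone tips the result
```

Nothing in the arithmetic can register what the speech points out next: that a person may develop and surprise us, something a fixed weighted sum cannot represent.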

It should also be noted that the use of applications similar to the one I have just mentioned will be used ever more frequently due to the fact that artificial intelligence programs will be increasingly equipped with the capacity to interact directly (chatbots) with human beings, holding conversations and establishing close relationships with them. These interactions may end up being, more often than not, pleasant and reassuring, since these artificial intelligence programs will be designed to learn to respond, in a personalised way, to the physical and psychological needs of human beings.

It is a frequent and serious mistake to forget that artificial intelligence is not another human being, and that it cannot propose general principles. This error stems either from the profound need of human beings to find a stable form of companionship, or from a subconscious assumption, namely the assumption that observations obtained by means of a calculating mechanism are endowed with the qualities of unquestionable certainty and unquestionable universality.

This assumption, however, is far-fetched, as can be seen by an examination of the inherent limitations of computation itself. Artificial intelligence uses algebraic operations that are carried out in a logical sequence (for example, if the value of X is greater than that of Y, multiply X by Y; otherwise divide X by Y). This method of calculation, the so-called algorithm, is neither objective nor neutral.[9] Moreover, since it is based on algebra, it can only examine realities formalised in numerical terms.[10]
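The parenthetical rule in the passage above can be written out directly. This short Python sketch simply encodes that example, to show how mechanical such a step is: a condition is tested and an algebraic operation follows, with no judgment involved anywhere.

```python
def branch_rule(x: float, y: float) -> float:
    """Encodes the text's example: if X > Y, multiply X by Y;
    otherwise divide X by Y. A choice, but not a decision."""
    if x > y:
        return x * y
    return x / y

print(branch_rule(6, 3))  # 6 > 3, so 6 * 3 = 18.0 is printed as 18
print(branch_rule(2, 4))  # 2 <= 4, so 2 / 4 = 0.5
```

The point carried by the sketch is the one the speech makes: everything the rule can ever express is already formalised in numerical terms before it runs.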

Nor should it be forgotten that algorithms designed to solve highly complex problems are so sophisticated that it is difficult for programmers themselves to understand exactly how they arrive at their results. This tendency towards sophistication is likely to accelerate considerably with the introduction of quantum computers that will operate not with binary circuits (semiconductors or microchips) but according to the highly complex laws of quantum physics. Indeed, the continuous introduction of increasingly high-performance microchips has already become one of the reasons for the dominant use of artificial intelligence by those few nations equipped in this regard.

Whether sophisticated or not, the quality of the answers that artificial intelligence programs provide ultimately depends on the data they use and how they are structured.

Finally, I would like to indicate one last area in which the complexity of the mechanism of so-called Generative Artificial Intelligence clearly emerges. Today, no one doubts that there are magnificent tools available for accessing knowledge, which even allow for self-learning and self-tutoring in a myriad of fields. Many of us have been impressed by the easily available online applications for composing a text or producing an image on any theme or subject. Students are especially attracted to this, but make disproportionate use of it when they have to prepare papers.

Students are often much better prepared for, and more familiar with, using artificial intelligence than their teachers. Yet they forget that, strictly speaking, so-called generative artificial intelligence is not really "generative." Instead, it searches big data for information and puts it together in the style required of it. It does not develop new analyses or concepts, but repeats those that it finds, giving them an appealing form. Then, the more it finds a repeated notion or hypothesis, the more it considers it legitimate and valid. Rather than being "generative," then, it is instead "reinforcing," in the sense that it rearranges existing content, helping to consolidate it, often without checking whether it contains errors or preconceptions.

In this way, it not only runs the risk of legitimising fake news and strengthening a dominant culture's advantage, but, in short, it also undermines the educational process itself. Education should provide students with the possibility of authentic reflection, yet it runs the risk of being reduced to a repetition of notions, which will increasingly be evaluated as unobjectionable, simply because of their constant repetition.[11]

Putting the dignity of the human person back at the centre, in light of a shared ethical proposal

A more general observation should now be added to what we have already said. The season of technological innovation in which we are currently living is accompanied by a particular and unprecedented social situation in which it is increasingly difficult to find agreement on the major issues concerning social life. Even in communities characterised by a certain cultural continuity, heated debates and arguments often arise, making it difficult to produce shared reflections and political solutions aimed at seeking what is good and just.

Thus, aside from the complexity of legitimate points of view found within the human family, there is also a factor emerging that seems to characterise the above-mentioned social situation, namely, a loss, or at least an eclipse, of the sense of what is human and an apparent reduction in the significance of the concept of human dignity.[12] Indeed, we seem to be losing the value and profound meaning of one of the fundamental concepts of the West: that of the human person. Thus, at a time when artificial intelligence programs are examining human beings and their actions, it is precisely the ethos concerning the understanding of the value and dignity of the human person that is most at risk in the implementation and development of these systems. Indeed, we must remember that no innovation is neutral. Technology is born for a purpose and, in its impact on human society, always represents a form of order in social relations and an arrangement of power, thus enabling certain people to perform specific actions while preventing others from performing different ones. In a more or less explicit way, this constitutive power dimension of technology always includes the worldview of those who invented and developed it.

This likewise applies to artificial intelligence programs. In order for them to be instruments for building up the good and a better tomorrow, they must always be aimed at the good of every human being. They must have an ethical inspiration.

Moreover, an ethical decision is one that takes into account not only an action's outcomes but also the values at stake and the duties that derive from those values. That is why I welcomed both the 2020 signing in Rome of the Rome Call for AI Ethics,[13] and its support for that type of ethical moderation of algorithms and artificial intelligence programs that I call "algor-ethics."[14] In a pluralistic and global context, where we see different sensitivities and multiple hierarchies in the scales of values, it might seem difficult to find a single hierarchy of values. Yet, in ethical analysis, we can also make use of other types of tools: if we struggle to define a single set of global values, we can, however, find shared principles with which to address and resolve dilemmas or conflicts regarding how to live.

This is why the Rome Call was born: with the term "algor-ethics", a series of principles is condensed into a global and pluralistic platform capable of finding support from cultures, religions, international organizations and major corporations, which are key players in this development.

The politics that is needed

We cannot, therefore, conceal the concrete risk, inherent in its fundamental design, that artificial intelligence might limit our worldview to realities expressible in numbers and enclosed in predetermined categories, thereby excluding the contribution of other forms of truth and imposing uniform anthropological, socio-economic and cultural models. The technological paradigm embodied in artificial intelligence runs the risk, then, of becoming a far more dangerous paradigm, which I have already identified as the technocratic paradigm.[15] We cannot allow a tool as powerful and indispensable as artificial intelligence to reinforce such a paradigm, but rather, we must make artificial intelligence a bulwark against its expansion.

This is precisely where political action is urgently needed. The Encyclical Fratelli Tutti reminds us that for many people today, politics is a distasteful word, often due to the mistakes, corruption and inefficiency of some politicians. There are also attempts to discredit politics, to replace it with economics or to twist it to one ideology or another. Yet "can our world function without politics? Can there be an effective process of growth towards universal fraternity and social peace without a sound political life?"[16]

Our answer to these questions is: No! Politics is necessary! I want to reiterate in this moment that "in the face of many petty forms of politics focused on immediate interests [...] true statecraft is manifest when, in difficult times, we uphold high principles and think of the long-term common good. Political powers do not find it easy to assume this duty in the work of nation-building" (Laudato Si', 178), much less in forging a common project for the human family, now and in the future.[17]

Esteemed ladies and gentlemen!

My reflection on the effects of artificial intelligence on humanity leads us to consider the importance of healthy politics so that we can look to our future with hope and confidence. I have written previously that "global society is suffering from grave structural deficiencies that cannot be resolved by piecemeal solutions or quick fixes. Much needs to change, through fundamental reform and major renewal. Only a healthy politics, involving the most diverse sectors and skills, is capable of overseeing this process. An economy that is an integral part of a political, social, cultural and popular programme directed to the common good could pave the way for different possibilities which do not involve stifling human creativity and its ideals of progress, but rather directing that energy along new channels" (Laudato Si', 191).[18]

This is precisely the situation with artificial intelligence. It is up to everyone to make good use of it, but the onus is on politics to create the conditions for such good use to be possible and fruitful.

Thank you.



See the original post here:
The effects of artificial intelligence on the future of humanity (by Pope Francis) - ZENIT