Archive for the ‘Artificial Intelligence’ Category

In Pics | Pope Francis gets an Artificial Intelligence-makeover … – WION

Written By: Mukul Sharma | Updated: Mar 28, 2023, 07:40 PM IST

Artificial Intelligence is the fad of the moment, with the latest depictions conjuring up convincing versions of previously unimaginable scenes, whether it's former US President Barack Obama enjoying a gala time at a beach with former German Chancellor Angela Merkel, or Donald Trump's 'artificial' arrest. These pictures of imagined realities, created by the 'artificial' side of technological intelligence, continue to stoke curious minds around the world.

Lately, Pope Francis, the head of the Roman Catholic Church, became the center of a few Artificial Intelligence showpieces, as they imagined him sporting some luxurious fashion.

Pope Francis is the Head of the Catholic Church, the bishop of Rome and sovereign of the Vatican City State. In this picture, Pope Francis holds a child at the end of an audience with people of Rho, near Milan, at the Paul VI hall in the Vatican on March 25, 2023. For many who are used to seeing the Pope conducting Catholic religious services, the AI avatar shown further on was nothing short of a shock.

'The Pope drip' becomes an AI trend

The Pope's AI virality reflects how readily AI images may be believed as real in the times to come

People indeed believed the Pope's AI images were real

Read more from the original source:
In Pics | Pope Francis gets an Artificial Intelligence-makeover ... - WION

Artificial intelligence and Baptist associations – The Baptist Paper

Artificial intelligence applications such as ChatGPT are growing rapidly. The creators themselves are surprised by the fast-growing interest in this resource. Part of their surprise is that the accuracy rate is still not what they believe it will soon be.

I use ChatGPT as a research and background tool. I asked what it knew about Baptist associations. On Feb. 26, 2023, I asked the following question: What is the importance of Baptist associations to the Southern Baptist Convention?

Here is what ChatGPT said:

Baptist associations are important to the Southern Baptist Convention (SBC) for several reasons.

First, they provide a way for local churches to connect with each other and work together in missions and ministry. Associations typically consist of a group of churches in a geographic area that share common beliefs and practices. By joining together in an association, churches can pool their resources and collaborate on projects that would be difficult to undertake alone.

Second, associations serve as a conduit for communication and cooperation between local churches and the larger SBC. Associations are usually affiliated with a state convention, which in turn is affiliated with the SBC. Through this structure, churches can participate in the work of the convention and have a voice in its decision-making processes.

Third, associations provide a means for training and support for pastors and other church leaders. Many associations offer workshops, conferences and other resources to help pastors and leaders develop their skills and deepen their understanding of Baptist theology and practice.

Overall, Baptist associations play a vital role in the life of the Southern Baptist Convention, connecting local churches, facilitating cooperation and communication and providing support and training for church leaders.

I posted the response to social media groups for the network of Baptist associations and elsewhere. I asked if ChatGPT got it right. The response was positive; some commented it was almost scary.

Was it complete and accurate? No, but it would point an inquirer in a generally correct direction.

This means the internet contains enough information that an initial understanding can be acquired about Baptist associations in the Southern Baptist tradition.

Use of the phrase "in the Southern Baptist tradition" is important because many of the three dozen Baptist denominations in North America have associations as part of their denominational structure.

First, artificial intelligence apps such as ChatGPT are helpful in summarizing information about Baptist associations in the Southern Baptist tradition. But they should not be accepted as authoritative. Baptist associations as a family are about a quality of relationships ChatGPT does not know.

Second, AI answers are incomplete.

At times they miss qualitative understandings. Such understandings require subjective experiences. The contextual and doctrinal understandings of associations differ widely throughout North America.

Third, AI may provide characteristics the inquirer has not thought about. This can be informative or at least cause a reader to ponder if they need to think more deeply about associations.

Fourth, the answers to specific questions we ask ChatGPT can change with each new update of the app's content database.

Fifth, the answer provided by an AI app should never be written or spoken without referring to the source. Attribution is imperative. I placed in quotes the answer AI gave me. I also dated when the information was accessed and indicated the specific app I used.

For churches in association and for individual churches and their leaders, artificial intelligence apps can be a great source of initial research on any topic relevant to church and denominational ministry.

On some religious, spiritual and theological questions, the app includes in its answer a cautionary statement that it is not an expert on the question asked or the subject of the inquiry. This is ethical and responsible.

One phrase I see in response to questions posed to ChatGPT is: "As a language model AI, I don't have personal opinions or beliefs."

In our speaking and writing, let's all be this ethical and responsible.

Use ChatGPT and other similar apps for research and background information and to stimulate your thinking but always remember that your knowledge of our Kingdom relationships is much richer.

See the original post:
Artificial intelligence and Baptist associations - The Baptist Paper

Artificial Intelligence automation may impact two-thirds of jobs, says Goldman Sachs – CNBCTV18

Given the speed at which artificial intelligence (AI) is advancing, it has the potential to significantly disrupt labour markets globally. This is borne out by research from Goldman Sachs.

According to that research, roughly two-thirds of current jobs in the US and the European Union are exposed to some degree of AI automation.

Administrative and legal work are the sectors likely to see the greatest impact: Goldman Sachs says 46 percent of administrative jobs and 44 percent of legal jobs could be substituted by AI. The least exposed are physically intensive professions such as construction, at six percent, and maintenance, at four percent.

While AI and automation can augment the productivity of some workers, they can replace the work done by others and will likely transform almost all occupations at least to some degree. Rising automation is happening in a period of growing economic inequality, raising fears of mass technological unemployment and a renewed call for policy efforts to address the consequences of technological change. But at the same time, it's being seen as one of the tools to enhance economic growth.

According to the Goldman Sachs research, AI could eventually increase annual global GDP by seven percent over a 10-year period. A combination of significant labour cost savings, new job creation and higher productivity for non-displaced workers is seen as what will drive that growth.

For the US, generative AI is seen raising annual US labour productivity growth by just under 1.5 percentage points over a 10-year period.

McKinsey has also surveyed more than 2,000 work activities across more than 800 occupations. The research shows that certain categories of activities are more easily automatable than others, including physical activities in highly predictable and structured environments, as well as data collection and data processing.

These account for roughly half of the activities that people do across all sectors. And, it believes, nearly all occupations will be affected by automation, but only about five percent of occupations could be fully automated by currently demonstrated technologies.

The size of AI's impact will ultimately depend on its capability and adoption timeline, both of which remain uncertain at this point.

Read more:
Artificial Intelligence automation may impact two-thirds of jobs, says Goldman Sachs - CNBCTV18

Robot recruiters: can bias be banished from AI hiring? – The Guardian

A third of Australian companies rely on artificial intelligence to help them hire the right person. But studies show it's not always a benign intermediary.

Sun 26 Mar 2023 10.00 EDT

Michael Scott, the protagonist from the US version of The Office, is using an AI recruiter to hire a receptionist.

Guardian Australia applies.

The text-based system asks applicants five questions that delve into how they responded to past work situations, including dealing with difficult colleagues and juggling competing work demands.

Potential employees type their answers into a chat-style program that resembles a responsive help desk. The real and unnerving power of AI then kicks in, sending a score and traits profile to the employer, and a personality report to the applicant. (More on our results later.)
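The article does not disclose how Sapia's model actually works, but the flow it describes (free-text answers in, marker scores and canned feedback out) can be illustrated with a deliberately simple sketch. Everything below is hypothetical: the markers, keyword lists, thresholds and feedback phrases are invented for illustration and are not Sapia's data or method.

```python
# Hypothetical sketch of a chat-interview trait scorer.
# Lexicons, thresholds and feedback phrases are invented for illustration;
# they are not Sapia's actual model or data.
from dataclasses import dataclass

# Toy lexicons: words loosely associated with each personality marker.
MARKER_LEXICONS = {
    "conscientiousness": {"plan", "deadline", "organised", "checked", "prepared"},
    "agreeableness": {"listened", "helped", "together", "calm", "understood"},
    "extraversion": {"team", "spoke", "presented", "group", "energised"},
}

# Pre-written, life-coach-style feedback snippets keyed by marker and band.
FEEDBACK_PHRASES = {
    ("conscientiousness", "high"): "You bring structure and follow-through to your work.",
    ("conscientiousness", "low"): "Potentially build in more planning time before you start.",
    ("agreeableness", "high"): "You are self-assured but considerate of others.",
    ("agreeableness", "low"): "Potentially balance directness with more listening.",
    ("extraversion", "high"): "You draw energy from working with people.",
    ("extraversion", "low"): "Potentially balance quiet focus with variety outside of work.",
}


@dataclass
class TraitResult:
    marker: str
    score: float  # 0.0-1.0, share of answers touching this marker
    feedback: str


def score_answers(answers: list[str]) -> list[TraitResult]:
    """Score free-text interview answers against each toy marker."""
    results = []
    for marker, lexicon in MARKER_LEXICONS.items():
        hits = sum(1 for a in answers if lexicon & set(a.lower().split()))
        score = hits / len(answers) if answers else 0.0
        band = "high" if score >= 0.5 else "low"
        results.append(TraitResult(marker, score, FEEDBACK_PHRASES[(marker, band)]))
    return results


if __name__ == "__main__":
    demo_answers = [
        "I planned the rollout, checked the deadline twice and kept the team informed.",
        "A colleague was upset, so I listened and we worked through it together.",
    ]
    for r in score_answers(demo_answers):
        print(f"{r.marker}: {r.score:.2f} -> {r.feedback}")
```

A real system would rely on trained language models rather than keyword counts, but the shape of the output, a score and traits profile for the employer and a short personality report for the applicant, matches what the demonstration describes.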

This demonstration, by the Melbourne-based startup Sapia.ai, resembles the initial structured interview process used by their clients, who include some of Australia's biggest companies such as Qantas, Medibank, Suncorp and Woolworths.

The process would typically create a shortlist an employer can follow up on, with insights on personality markers including humility, extraversion and conscientiousness.

For customer service roles, it is designed to help an employer know whether someone is amiable. For a manual role, an employer might want to know whether an applicant will turn up on time.

"You basically interview the world; everybody gets an interview," says Sapia's founder and chief executive, Barb Hyman.

The selling points of AI hiring are clear: it can automate costly and time-consuming processes for businesses and government agencies, especially in large recruitment drives for non-managerial roles.

Sapia's biggest claim, however, might be that it is the only way to give someone a fair interview.

"The only way to remove bias in hiring is to not use people right at the first gate," Hyman says. "That's where our technology comes in: it's blind; it's untimed; it doesn't use résumé data or your social media data or demographic data. All it is using is the text results."

Sapia is not the only AI company claiming its technology will reduce bias in the hiring process. A host of companies around Australia are offering AI-augmented recruitment tools, including not just chat-based models but also one-way video interviews, automated reference checks, social media analysers and more.

In 2022 a survey of Australian public sector agencies found at least a quarter had used AI-assisted tech in recruitment that year. Separate research from the Diversity Council of Australia and Monash University suggests that a third of Australian organisations are using it at some point in the hiring process.

Applicants, though, are often not aware that they will be subjected to an automated process, or on what basis they will be assessed within that.

The office of the Merit Protection Commissioner advises public service agencies that when they use AI tools for recruitment, there should be a clear demonstrated connection between the candidate's qualities being assessed and the qualities required to perform the duties of the job.

The commissioners office also cautions that AI may assess candidates on something other than merit, raise ethical and legal concerns about transparency and data bias, produce biased results or cause statistical bias by erroneously interpreting socioeconomic markers as indicative of success.

There's good reason for that warning. AI's track record on bias has been worrying.

In 2017 Amazon quietly scrapped an experimental candidate-ranking tool that had been trained on CVs from the mostly male tech industry, effectively teaching itself that male candidates were preferable. The tool systematically downgraded women's CVs, penalising those that included phrases such as "women's chess club captain", and elevating those that used verbs more commonly found on male engineers' CVs, such as "executed" and "captured".

Research out of the US in 2020 demonstrated that facial-analysis technology created by Microsoft and IBM, among others, performed better on lighter-skinned subjects and men, with darker-skinned females most often misgendered by the programs.

Last year a study out of Cambridge University showed that AI is not a benign intermediary but that by constructing associations between words and peoples bodies it helps to produce the ideal candidate rather than merely observing or identifying it.

Natalie Sheard, a lawyer and PhD candidate at La Trobe University whose doctorate examines the regulation of and discrimination in AI-based hiring systems, says this lack of transparency is a huge problem for equity.

"Messenger-style apps are based on natural language processing, similar to ChatGPT, so the training data for those systems tends to be the words or vocal sounds of people who speak standard English," Sheard says.

"So if you're a non-native speaker, how does it deal with you? It might say you don't have good communication skills if you don't use standard English grammar, or you might have different cultural traits that the system might not recognise because it was trained on native speakers."

Another concern is how physical disability is accounted for in something like a chat or video interview. And with the lack of transparency around whether assessments are being made with AI and on what basis, it's often impossible for candidates to know that they may need reasonable adjustments to which they are legally entitled.

"There are legal requirements for organisations to adjust for disability in the hiring process," Sheard says. "But that requires people to disclose their disability straight up, when they have no trust with this employer. And these systems change traditional recruitment practices, so you don't know what the assessment is all about, you don't know an algorithm is going to assess you or how. You might not know that you need a reasonable adjustment."

Australia has no laws specifically governing AI recruitment tools. While the department of industry has developed an AI ethics framework, which includes principles of transparency, explainability, accountability and privacy, the code is voluntary.

"There are low levels of understanding in the community about AI systems, and because employers are very reliant on these vendors, they deploy [the tools] without any governance systems," Sheard says.

"Employers don't have any bad intent; they want to do the right things but they have no idea what they should be doing. There are no internal oversight mechanisms set up, no independent auditing systems to ensure there is no bias."

Hyman says client feedback and independent research shows that the broader community is comfortable with recruiters using AI.

"They need to have an experience that is inviting, inclusive and attracts more diversity," Hyman says. She says Sapia's untimed, low-stress, text-based system fits these criteria.

"You are twice as likely to get women and keep women in the hiring process when you're using AI. It's a complete fiction that people don't want it and don't trust it. We see the complete opposite in our data."

Research from the Diversity Council of Australia and Monash University is not quite so enthusiastic, showing there is a clear divide between employers and candidates who were converted or cautious about AI recruitment tools, with 50% of employers converted to the technology but only a third of job applicants. First Nations job applicants were among those most likely to be worried.

DCA recommends recruiters be transparent about the due diligence protocols they have in place to ensure AI-supported recruitment tools are bias-free, inclusive and accessible.

In the Sapia demonstration, the AI quickly generates brief notes of personality feedback at the end of the application for the interviewee.

This is based on how someone rates on various markers, including conscientiousness and agreeableness, which the AI matches with pre-written phrases that resemble something a life coach might say.

A more thorough assessment not visible to the applicant would be sent to the recruiter.

Sapia says its chat-interview software also analyses language proficiency and includes a profanity detector, which the company says are important considerations for customer-facing roles.

Hyman says the language analysis is based on the billion words of data collected from responses in the years since the tech company was founded in 2013. The data itself is proprietary.

So, could Guardian Australia work for Michael Scott at the fictional paper company Dunder Mifflin?

"You are self-assured but not overly confident," the personality feedback says in response to Guardian Australia's application in the AI demonstration.

It follows with a subtle suggestion that this applicant might not be a good fit for the receptionist role, which requires "repetition, routine and following a defined process".

But it has some helpful advice: "Potentially balance that with variety outside of work."

Looks like we're not a good fit for this job.

Read more:
Robot recruiters: can bias be banished from AI hiring? - The Guardian

The Future is Now: Exploring the Importance of Artificial Intelligence – The Geopolitics

Artificial intelligence (AI) is a rapidly growing field that has captured the attention of scientists, engineers, business leaders, and policymakers worldwide. It refers to the development of computer systems that can perform tasks that typically require human intelligence, such as learning, problem-solving, perception, and decision-making. AI has the potential to transform various industries and sectors, including healthcare, transportation, education, manufacturing, and finance, among others. In this article, we will explore the importance of artificial intelligence in the future and its potential benefits and challenges.

One of the most significant advantages of artificial intelligence is its ability to automate routine and repetitive tasks, allowing humans to focus on more complex and creative work. For instance, AI-powered robots and machines can perform tasks like assembling products, packaging goods, and transporting materials with greater speed, accuracy, and efficiency than humans. This can help businesses increase productivity, reduce costs, and improve the quality of their products and services.

Another important benefit of artificial intelligence is its ability to analyze and interpret vast amounts of data, enabling organizations to gain valuable insights into customer behavior, market trends, and business operations. By using advanced algorithms and machine learning techniques, AI systems can identify patterns, correlations, and anomalies in data that would be challenging for humans to detect. This can help businesses make data-driven decisions, optimize their processes, and improve their overall performance.
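As a minimal illustration of the kind of anomaly detection described above, a simple statistical check can flag values that would be hard to spot by eye in a large table. The sales figures and the two-standard-deviation threshold below are invented for the example; real systems would use far more sophisticated models.

```python
# Minimal anomaly-detection sketch: flag daily sales figures that sit far
# from the mean. The data and the 2-sigma threshold are illustrative only.
import statistics

daily_sales = [102, 98, 105, 110, 97, 101, 430, 99, 103, 100]  # fabricated demo data

mean = statistics.mean(daily_sales)
stdev = statistics.stdev(daily_sales)

# Flag any day whose sales deviate from the mean by more than two standard deviations.
anomalies = [
    (day, value)
    for day, value in enumerate(daily_sales)
    if abs(value - mean) > 2 * stdev
]

print(f"mean={mean:.1f}, stdev={stdev:.1f}, anomalies={anomalies}")
```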

Moreover, artificial intelligence has the potential to revolutionize healthcare by improving the accuracy and efficiency of medical diagnoses, treatments, and research. For example, AI systems can analyze medical images, such as X-rays and CT scans, to detect signs of diseases like cancer, heart disease, and Alzheimer's, with greater accuracy than human doctors. AI-powered chatbots and virtual assistants can also provide patients with personalized health advice, monitor their symptoms, and remind them to take their medication. Additionally, AI can help accelerate drug discovery and development by predicting the efficacy and safety of new drugs and identifying potential side effects.

In the field of education, artificial intelligence can help personalize learning and improve student outcomes by providing tailored instruction and feedback based on individual needs and preferences. For example, AI systems can analyze students' performance data and adjust their learning paths and content accordingly. AI-powered chatbots can also provide students with instant answers to their questions and feedback on their assignments. Moreover, AI can help educators develop more effective teaching strategies by providing insights into student engagement, motivation, and learning preferences.
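As a toy illustration of adjusting a learning path from performance data, the rule below routes a student to review or advanced material based on quiz scores. The topics, scores and mastery threshold are hypothetical; production adaptive-learning systems would use much richer models of student mastery.

```python
# Hypothetical adaptive-learning rule: route a student to review or advanced
# material based on quiz scores. Topics, scores and the 0.7 threshold are
# invented for illustration.
quiz_scores = {"fractions": 0.55, "decimals": 0.82, "percentages": 0.91}
MASTERY_THRESHOLD = 0.7  # assumed cut-off, not from the article

learning_path = []
for topic, score in quiz_scores.items():
    if score < MASTERY_THRESHOLD:
        learning_path.append(f"review: {topic}")
    else:
        learning_path.append(f"advance: {topic}")

print(learning_path)  # ['review: fractions', 'advance: decimals', 'advance: percentages']
```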

However, along with its potential benefits, artificial intelligence also poses significant challenges and risks that need to be addressed. One of the main concerns is the potential impact of AI on employment, as automation and AI systems may replace human workers in various industries and occupations. While AI can create new job opportunities in areas like software development, data analysis, and robotics, it may also lead to job losses in other sectors, particularly those that involve routine and repetitive tasks.

Another challenge of artificial intelligence is its potential to perpetuate and amplify social biases and inequalities. AI systems are only as unbiased as the data they are trained on, and if the data contain biased or discriminatory patterns, the AI systems will replicate and reinforce them. This can lead to unfair and discriminatory outcomes in areas like hiring, lending, and law enforcement. Therefore, it is essential to ensure that AI systems are developed and deployed ethically and with diversity and inclusivity in mind.

Moreover, artificial intelligence also raises concerns about privacy, security, and accountability. AI systems often collect and process sensitive personal data, such as medical records, financial information, and social media activity, raising concerns about data breaches, identity theft, and surveillance. Additionally, AI systems may make decisions that have significant consequences for individuals and society, such as determining eligibility for loans or insurance, or recommending criminal sentences. Therefore, it is crucial to ensure that AI systems are transparent, accountable, and subject to ethical and legal oversight.

Artificial intelligence is a powerful and transformative technology that has the potential to bring significant benefits to various industries and sectors. By automating routine tasks, analyzing data, and improving decision-making, AI can help increase productivity, reduce costs, and improve the quality of products and services. In healthcare, education, and other fields, AI can improve outcomes and accelerate progress. However, AI also poses significant challenges and risks that need to be addressed, such as job displacement, bias and discrimination, privacy, security, and accountability. Therefore, it is essential to ensure that AI is developed and deployed ethically, transparently, and with diversity and inclusivity in mind. By doing so, we can harness the power of AI to create a more prosperous, equitable, and sustainable future for all.

[Gerd Altmann / Pixabay]

Carl Taylor is a tech author with over 12 years of experience in the industry. He has written numerous articles on topics such as artificial intelligence, machine learning, and data science. The views and opinions expressed in this article are those of the author.

See the article here:
The Future is Now: Exploring the Importance of Artificial Intelligence - The Geopolitics