Archive for the ‘Artificial Intelligence’ Category

ChatGPT in the Humanities Panel: Researchers Share Concerns … – Cornell University The Cornell Daily Sun

Does the next Aristotle, Emily Dickinson or Homer live on your computer? A group of panelists explored this idea in a talk titled "ChatGPT and the Humanities" on Friday in the A.D. White House's Guerlac Room.

ChatGPT's ability to produce creative literature was one of the central topics explored in the talk, as the discourse on the use of artificial intelligence software in academic spheres continues to grow.

In the panel, Prof. Morten Christiansen, psychology, Prof. Laurent Dubreuil, comparative literature, Pablo Contreras Kallens grad and Jacob Matthews grad explored the benefits and consequences of utilizing artificial intelligence within humanities research and education.

The forum was co-sponsored by the Society for the Humanities, the Humanities Lab and the New Frontier Grant program.

The Society for the Humanities was established in 1966 and connects visiting fellows, Cornell faculty and graduate students to conduct interdisciplinary research connected to an annual theme. This year's focal theme is "Repair," which refers to the conservation, restoration and replication of objects, relations and histories.

All four panelists are members of the Humanities Lab, which works to provide an intellectual space for scholars to pursue research relating to the interaction between the sciences and the humanities. The lab was founded by Dubreuil in 2019 and is currently led by him.

Christiansen and Dubreuil also recently received New Frontier Grants for their project titled "Poetry, AI and the Mind: A Humanities-Cognitive Science Transdisciplinary Exploration," which focuses on the application of artificial intelligence to literature, cognitive science and mental and cultural diversity. For well over a year, they have worked on an experiment comparing human poetry generation to that of ChatGPT, with the continuous help of Contreras Kallens and Matthews.

Before the event began, attendees expressed their curiosity and concerns about novel AI technology.

Lauren Scheuer, a writing specialist at the Keuka College Writing Center and Tompkins County local, described worries about the impact of ChatGPT on higher education.

"I'm concerned about how ChatGPT is being used to teach and to write and to generate content," Scheuer said.

Sarah Milliron grad, who is pursuing a Ph.D. in psychology, also said that she was concerned about ChatGPT's impact on academia as the technology becomes more widely used.

"I suppose I'm hoping [to gain] a bit of optimism [from this panel]," Milliron said. "I hope that they address ways that we can work together with AI as opposed to [having] it be something that we ignore or have it be something that we are trying to get rid of."

Dubreuil first explained that there has been a recent interest in artificial intelligence due to the impressive performance of ChatGPT and its successful marketing campaign.

"All scholars, but especially [those in the] humanities, are currently wondering if we should take into account the new capabilities of automated text generators," Dubreuil said.

Dubreuil expressed that scholars have varying concerns and ideas regarding ChatGPT.

"Some [scholars] believe we should counteract [ChatGPT's consequences] by means of new policies," Dubreuil said. "Other [scholars] complained about the lack of morality or the lack of political apropos that is exhibited by ChatGPT. Other [scholars] say that there is too much political apropos and political correctness."

Dubreuil noted that other scholars prophesy that AI could lead to the fall of humanity.

For example, historian Yuval Harari recently wrote about the 2022 Expert Survey on Progress in AI, which found that out of more than 700 surveyed top academics and researchers, half said that there was at least a 10 percent chance of "human extinction or similarly permanent and severe disempowerment" due to future AI systems.

Contreras Kallens then elaborated on their poetry experiment, which utilized what he referred to as "fragment completion": ChatGPT and Cornell undergraduates were both prompted to continue writing from two lines of poetry by an author such as Dickinson.

Contreras Kallens said that ChatGPT generally matched the poetry quality of a Cornell undergraduate, while expectedly falling short of the original author's writing. However, the author recognition program they used actually confused the artificial productions with the original author's work.

The final part of the project, which the group is currently refining, will measure whether students can differentiate between whether a fragment was completed by the original author, an undergraduate or by ChatGPT.

When describing the importance of this work, Contreras Kallens explained the concept of universal grammar, a linguistic theory suggesting that people are innately, biologically programmed to learn grammar. Thus, ChatGPT's ability to reach the writing quality of many humans challenges assumptions about the technology's shortcomings.

"[This model] invites a deeper reconsideration of language assumptions or language acquisition processing," Contreras Kallens said. "And that's at least interesting."

Matthews then expressed that his interest in AI does not lie in its generative abilities but in the possibility of representing text numerically and computationally.

"Often humanists are dealing with large volumes of text [and] they might be very different," Matthews said. "[It is] fundamental to the humanities that we debate [with each other] about what texts mean, how they relate to one another; we're always putting different things into relation with one another. And it would be nice sometimes to have a computational or at least quantitative basis that we could maybe talk about, or debate or at least have access to."

Matthews explained that autoregressive language models, machine learning models that use the preceding words in a text to predict the word that follows, can reveal the perceived similarity between certain words.
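The "predict the following word" idea can be illustrated with a toy sketch. This is only an illustration under strong assumptions: real models such as GPT use neural networks over long contexts, while this sketch predicts each next word from simple bigram counts over a made-up corpus.

```python
from collections import Counter, defaultdict

# Toy corpus; any running text would do.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, "mat" only once
```

Generating text autoregressively simply repeats this step, feeding each predicted word back in as the new context.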

Through assessing word similarity, Matthews found that ChatGPT contains gendered language bias, which he said reflects the bias in human communication.

For example, Matthews inputted the names Mary and James, the most common female and male names in the United States, along with Sam, which was used as a gender-neutral name. He found that James is closer to the occupations of lawyer, programmer and doctor than the other names are, particularly Mary.
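One common way to quantify the "closeness" described here is cosine similarity between word vectors. The sketch below uses invented three-dimensional toy vectors, not real model embeddings, purely to show the mechanics of such a measurement.

```python
import math

# Hypothetical toy embeddings; a real system would use vectors with
# hundreds or thousands of dimensions produced by a trained model.
embeddings = {
    "James":  [0.9, 0.1, 0.3],
    "Mary":   [0.1, 0.9, 0.3],
    "Sam":    [0.5, 0.5, 0.3],
    "lawyer": [0.8, 0.2, 0.4],
}

def cosine(u, v):
    """Cosine of the angle between vectors u and v (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# With these toy vectors, "James" sits closest to "lawyer", "Sam" is in
# the middle and "Mary" is furthest, mirroring the kind of gendered
# association described in the panel.
for name in ("James", "Sam", "Mary"):
    print(name, round(cosine(embeddings[name], embeddings["lawyer"]), 3))
```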

Matthews explained that these biases were more prevalent in previous language modeling systems, but that the makers of GPT-3.5, the embedding model of ChatGPT, as opposed to GPT-3, which is the model currently available to the public, have acknowledged bias in their systems.

"It's not just that [these models] learn language, they're also exposed to biases that are present in text," Matthews said. "This can be visible in social contexts especially, and if we're deploying these models, this has consequences if they're used in decision making."

Matthews also demonstrated that encoding systems can textually analyze and compare literary works, such as those by Shakespeare and Dickinson, making them a valuable resource for humanists, especially regarding large texts.

"Humanists are already engaged in thinking about these types of questions [referring to the model's semantics and cultural analyses]," Matthews said. "But we might not have the capacity or the time to analyze the breadth of text that we want to, and we might not be able to assign or even to recall all the things that we're reading. So if we're using this in parallel with the existing skill sets that humanists have, I think that this is really valuable."

Christiansen, who is part of a new University-wide committee looking into the potential use of generative AI, then talked about the opportunities and challenges of the use of AI in education and teaching.

Christiansen described that one positive pedagogical use of ChatGPT is to have students ask the software specific questions and then for the students to criticize the answers. He also explained that ChatGPT may help with the planning process of writing, which he noted many students frequently discount.

"I think also, importantly, that [utilizing ChatGPT in writing exercises] can actually provide a bit of a level playing field for second language learners, of which we have many here at Cornell," Christiansen said.

Christiansen added that ChatGPT can act as a personal tutor, help students develop better audience sensitivity, work as a translator and provide summaries.

However, these models also have several limitations. For instance, ChatGPT knows very little about any events that occurred after September 2021 and will be clueless about recent issues, such as the Ukraine war.

Furthermore, Christiansen emphasized that these models can and will "hallucinate," which refers to their making up information, including falsifying references. He also noted that students could potentially use ChatGPT to violate academic integrity.

Overall, Dubreuil expressed concern for the impact of technologies such as ChatGPT on innovation. He explained that ChatGPT currently only reorganizes data, which falls short of true invention.

"There is a wide range between simply incremental inventions and rearrangements that are such that they not only rearrange the content, but they reconfigure the given and the way the given was produced, its meanings, its values and its consequences," Dubreuil said.

Dubreuil argued that if standards for human communication do not require invention, not only will AI produce work that is not truly creative, but humans may become less inventive as well.

"It has to be said that through social media, especially through our algorithmic life, these days, we may have prepared our own minds to become much more similar to a chatbot. We may be reprogramming ourselves constantly and that's the danger," Dubreuil said. "The challenge of AI is a provocation toward reform."

Correction, March 27, 2:26 p.m.: A previous version of this article incorrectly stated the time frame about which ChatGPT is familiar and the current leaders of the Humanities Lab. In addition, minor clarification has been added to the description of Christiansen and Dubreuil's study on AI poetry generation. The Sun regrets these errors, and the article has been corrected.


In Pics | Pope Francis gets an Artificial Intelligence-makeover … – WION

Written by Mukul Sharma | Updated: Mar 28, 2023, 07:40 PM IST

Artificial intelligence is the fad of these times, with its latest depictions conjuring convincing versions of previously unimaginable realities. Whether it's former US President Barack Obama enjoying a gala time at a beach with former German Chancellor Angela Merkel, or Donald Trump's 'artificial' arrest, the pictures of these imagined realities continue to stoke curious minds around the world.

Lately, Pope Francis, the head of the Roman Catholic Church, became the center of a few artificial intelligence showpieces, as they imagined him sporting some luxurious fashion.

Pope Francis is the head of the Catholic Church, the bishop of Rome and sovereign of the Vatican City State. In this picture, Pope Francis holds a child at the end of an audience with people of Rho, near Milan, at the Paul VI hall in the Vatican on March 25, 2023. For many who are used to seeing the Pope conducting Catholic religious services, the AI avatar was nothing short of a shock.

'The Pope drip' becomes an AI trend

Pope's AI virality reflects how AI images will be believed as true in coming times

People believed the Pope's AI images were real indeed


Artificial intelligence and Baptist associations – The Baptist Paper

Artificial intelligence applications such as ChatGPT are growing rapidly. The creators themselves are surprised by the fast-growing interest in this resource, in part because its accuracy rate is still not what they believe it soon will be.

I use ChatGPT as a research and background tool. I asked what it knew about Baptist associations. On Feb. 26, 2023, I asked the following question: "What is the importance of Baptist associations to the Southern Baptist Convention?"

Here is what ChatGPT said:

Baptist associations are important to the Southern Baptist Convention (SBC) for several reasons.

First, they provide a way for local churches to connect with each other and work together in missions and ministry. Associations typically consist of a group of churches in a geographic area that share common beliefs and practices. By joining together in an association, churches can pool their resources and collaborate on projects that would be difficult to undertake alone.

Second, associations serve as a conduit for communication and cooperation between local churches and the larger SBC. Associations are usually affiliated with a state convention, which in turn is affiliated with the SBC. Through this structure, churches can participate in the work of the convention and have a voice in its decision-making processes.

Third, associations provide a means for training and support for pastors and other church leaders. Many associations offer workshops, conferences and other resources to help pastors and leaders develop their skills and deepen their understanding of Baptist theology and practice.

Overall, Baptist associations play a vital role in the life of the Southern Baptist Convention, connecting local churches, facilitating cooperation and communication and providing support and training for church leaders.

I posted the response to social media groups for the network of Baptist associations and elsewhere. I asked if ChatGPT got it right. The response was positive; some commented it was almost scary.

Was it complete and accurate? No, but it would point an inquirer in a generally correct direction.

This means the internet contains enough information that an initial understanding can be acquired about Baptist associations in the Southern Baptist tradition.

Use of the phrase "in the Southern Baptist tradition" is important because many of the three dozen Baptist denominations in North America have associations as part of their denominational structure.

First, artificial intelligence apps such as ChatGPT are helpful in summarizing information about Baptist associations in the Southern Baptist tradition. But they should not be accepted as authoritative. Baptist associations as a family are about a quality of relationships ChatGPT does not know.

Second, AI answers are incomplete.

At times they miss qualitative understandings. Such understandings require subjective experiences. The contextual and doctrinal understandings of associations differ widely throughout North America.

Third, AI may provide characteristics the inquirer has not thought about. This can be informative or at least cause a reader to ponder if they need to think more deeply about associations.

Fourth, the answers to specific questions we ask ChatGPT can change with each new update of the app's content database.

Fifth, the answer provided by an AI app should never be written or spoken without referring to the source. Attribution is imperative. I placed in quotes the answer AI gave me. I also dated when the information was accessed and indicated the specific app I used.

For churches in association and for individual churches and their leaders, artificial intelligence apps can be a great source of initial research on any topic relevant to church and denominational ministry.

On some religious, spiritual and theological questions, the app includes in its answer a cautionary statement that it is not an expert on the question asked or the subject of the inquiry. This is ethical and responsible.

One phrase I see in response to questions posed to ChatGPT is, "As a language model AI, I don't have personal opinions or beliefs."

In our speaking and writing, let's all be this ethical and responsible.

Use ChatGPT and other similar apps for research and background information and to stimulate your thinking but always remember that your knowledge of our Kingdom relationships is much richer.


Artificial Intelligence automation may impact two-thirds of jobs, says Goldman Sachs – CNBCTV18

Given the speed at which artificial intelligence (AI) is advancing, it has the potential to significantly disrupt labour markets globally. This is borne out by research from Goldman Sachs.

As per that research, roughly two-thirds of current jobs are exposed to some degree of AI automation in the US and the European Union.

Administrative and legal are those sectors which can see the maximum impact. Goldman Sachs says 46 percent of administrative jobs and 44 percent of legal jobs can be substituted by AI. The ones with low exposures are physically-intensive professions such as construction at six percent and maintenance at four percent.

While AI and automation can augment the productivity of some workers, they can replace the work done by others and will likely transform almost all occupations at least to some degree. Rising automation is happening in a period of growing economic inequality, raising fears of mass technological unemployment and a renewed call for policy efforts to address the consequences of technological change. But at the same time, it's being seen as one of the tools to enhance economic growth.

As per Goldman Sachs research, AI could eventually increase annual global GDP by seven percent over a 10-year period. A combination of significant labour cost savings, new job creation and higher productivity for non-displaced workers is seen as what will boost that growth.

For the US, generative AI is seen raising annual US labour productivity growth by just under 1.5 percentage points over a 10-year period.

McKinsey research, too, has surveyed more than 2,000 work activities across more than 800 occupations. It shows that certain categories of activities are more easily automatable than others, including physical activities in highly predictable and structured environments, as well as data collection and data processing.

These account for roughly half of the activities that people do across all sectors. And, it believes, nearly all occupations will be affected by automation, but only about five percent of occupations could be fully automated by currently demonstrated technologies.

The size of AI's impact will ultimately depend on its capability and adoption timeline, both of which remain uncertain at this point.


Robot recruiters: can bias be banished from AI hiring? – The Guardian

A third of Australian companies rely on artificial intelligence to help them hire the right person. But studies show it's not always a benign intermediary

Sun 26 Mar 2023 10.00 EDT

Michael Scott, the protagonist from the US version of The Office, is using an AI recruiter to hire a receptionist.

Guardian Australia applies.

The text-based system asks applicants five questions that delve into how they responded to past work situations, including dealing with difficult colleagues and juggling competing work demands.

Potential employees type their answers into a chat-style program that resembles a responsive help desk. The real and unnerving power of AI then kicks in, sending a score and traits profile to the employer, and a personality report to the applicant. (More on our results later.)

This demonstration, by the Melbourne-based startup Sapia.ai, resembles the initial structured interview process used by their clients, who include some of Australia's biggest companies, such as Qantas, Medibank, Suncorp and Woolworths.

The process would typically create a shortlist an employer can follow up on, with insights on personality markers including humility, extraversion and conscientiousness.

For customer service roles, it is designed to help an employer know whether someone is amiable. For a manual role, an employer might want to know whether an applicant will turn up on time.

"You basically interview the world; everybody gets an interview," says Sapia's founder and chief executive, Barb Hyman.

The selling points of AI hiring are clear: it can automate costly and time-consuming processes for businesses and government agencies, especially in large recruitment drives for non-managerial roles.

Sapia's biggest claim, however, might be that it is the only way to give someone a fair interview.

"The only way to remove bias in hiring is to not use people right at the first gate," Hyman says. "That's where our technology comes in: it's blind; it's untimed; it doesn't use résumé data or your social media data or demographic data. All it is using is the text results."

Sapia is not the only AI company claiming its technology will reduce bias in the hiring process. A host of companies around Australia are offering AI-augmented recruitment tools, including not just chat-based models but also one-way video interviews, automated reference checks, social media analysers and more.

In 2022 a survey of Australian public sector agencies found at least a quarter had used AI-assisted tech in recruitment that year. Separate research from the Diversity Council of Australia and Monash University suggests that a third of Australian organisations are using it at some point in the hiring process.

Applicants, though, are often not aware that they will be subjected to an automated process, or on what basis they will be assessed within that.

The office of the Merit Protection Commissioner advises public service agencies that when they use AI tools for recruitment, there should be a clear demonstrated connection between the candidate's qualities being assessed and the qualities required to perform the duties of the job.

The commissioners office also cautions that AI may assess candidates on something other than merit, raise ethical and legal concerns about transparency and data bias, produce biased results or cause statistical bias by erroneously interpreting socioeconomic markers as indicative of success.

There's good reason for that warning. AI's track record on bias has been worrying.

In 2017 Amazon quietly scrapped an experimental candidate-ranking tool that had been trained on CVs from the mostly male tech industry, effectively teaching itself that male candidates were preferable. The tool systematically downgraded women's CVs, penalising those that included phrases such as "women's chess club captain," and elevating those that used verbs more commonly found on male engineers' CVs, such as "executed" and "captured."

Research out of the US in 2020 demonstrated that facial-analysis technology created by Microsoft and IBM, among others, performed better on lighter-skinned subjects and men, with darker-skinned females most often misgendered by the programs.

Last year a study out of Cambridge University showed that AI is not a benign intermediary but that, by constructing associations between words and people's bodies, it helps to produce the "ideal candidate" rather than merely observing or identifying it.

Natalie Sheard, a lawyer and PhD candidate at La Trobe University whose doctorate examines the regulation of and discrimination in AI-based hiring systems, says this lack of transparency is a huge problem for equity.

"Messenger-style apps are based on natural language processing, similar to ChatGPT, so the training data for those systems tends to be the words or vocal sounds of people who speak standard English," Sheard says.

"So if you're a non-native speaker, how does it deal with you? It might say you don't have good communication skills if you don't use standard English grammar, or you might have different cultural traits that the system might not recognise because it was trained on native speakers."

Another concern is how physical disability is accounted for in something like a chat or video interview. And with the lack of transparency around whether assessments are being made with AI and on what basis, it's often impossible for candidates to know that they may need reasonable adjustments to which they are legally entitled.

"There are legal requirements for organisations to adjust for disability in the hiring process," Sheard says. "But that requires people to disclose their disability straight up when they have no trust with this employer. And these systems change traditional recruitment practices, so you don't know what the assessment is all about, you don't know an algorithm is going to assess you or how. You might not know that you need a reasonable adjustment."

Australia has no laws specifically governing AI recruitment tools. While the department of industry has developed an AI ethics framework, which includes principles of transparency, explainability, accountability and privacy, the code is voluntary.

"There are low levels of understanding in the community about AI systems, and because employers are very reliant on these vendors, they deploy [the tools] without any governance systems," Sheard says.

"Employers don't have any bad intent, they want to do the right things but they have no idea what they should be doing. There are no internal oversight mechanisms set up, no independent auditing systems to ensure there is no bias."

Hyman says client feedback and independent research shows that the broader community is comfortable with recruiters using AI.

"They need to have an experience that is inviting, inclusive and attracts more diversity," Hyman says. She says Sapia's untimed, low-stress, text-based system fits these criteria.

"You are twice as likely to get women and keep women in the hiring process when you're using AI. It's a complete fiction that people don't want it and don't trust it. We see the complete opposite in our data."

Research from the Diversity Council of Australia and Monash University is not quite so enthusiastic, showing a clear divide between employers and candidates over AI recruitment tools: 50% of employers are converted to the technology, but only a third of job applicants are. First Nations job applicants were among those most likely to be worried.

DCA recommends recruiters be transparent about the due diligence protocols they have in place to ensure AI-supported recruitment tools are bias-free, inclusive and accessible.

In the Sapia demonstration, the AI quickly generates brief notes of personality feedback at the end of the application for the interviewee.

This is based on how someone rates on various markers, including conscientiousness and agreeableness, which the AI matches with pre-written phrases that resemble something a life coach might say.

A more thorough assessment not visible to the applicant would be sent to the recruiter.

Sapia says its chat-interview software analysed language proficiency, with a profanity detector included too; the company says these are important considerations for customer-facing roles.

Hyman says the language analysis is based on the billion words of data collected from responses in the years since the tech company was founded in 2013. The data itself is proprietary.

So, could Guardian Australia work for Michael Scott at the fictional paper company Dunder Mifflin?

"You are self-assured but not overly confident," the personality feedback says in response to Guardian Australia's application in the AI demonstration.

It follows with a subtle suggestion that this applicant might not be a good fit for the receptionist role, which requires "repetition, routine and following a defined process."

But it has some helpful advice: "Potentially balance that with variety outside of work."

Looks like we're not a good fit for this job.

