Archive for the ‘Artificial Intelligence’ Category

Asana launches new work intelligence tools with AI on the way – TechCrunch

With all the software tools we have, it's still hard to figure out how work moves across large organizations, the state of a project and how closely people are tracking against personal, team and company goals. Asana, which is built on top of a work graph, has that knowledge, and the company announced a new set of dashboards today to give managers the data they need to make sure projects are staying on budget and meeting goals.

Alex Hood, chief product officer at Asana, says that the reporting capabilities put a set of information at managers' fingertips that previously had to be manually pulled from various systems. "We've created executive reporting that can live at any altitude of the company. So no matter [your job], you can have your set of dashboards of the things that you care about that come in instantly just by selecting [in Asana] which teams and projects and portfolios that you care about," Hood told TechCrunch.

In practice, this involves providing a single view into strategic initiatives, team capacity and budgets. Hood says it builds on the graph model that underlies the entire Asana platform, but the company is working to bring artificial intelligence to the process to make it even smarter. "The next step will be using AI to generate the portfolios of the things that you care about instantaneously. So having them become smarter and smarter, but the fact that they can be at any level across an entire organization, that is part of this new launch," he said.

In addition, the platform now helps people understand the workload for any given skill across the organization, showing who has bandwidth and who is overloaded so companies can distribute work more evenly, a capability Asana refers to as resource intelligence. "We are shipping the ability to see the workload of anybody across the whole domain or organization. We show [this data] in a very graphical format, so you can see who's burning out, who's under capacity, and you can load balance between them across an organization that doesn't even share a hierarchy," he said.
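Asana has not published how resource intelligence is computed, so the following is only a toy sketch of the underlying idea: aggregate estimated hours per assignee over work-graph-style task records and flag who is over or under capacity. The Task record and the 40-hour weekly threshold are assumptions for illustration, not Asana's actual schema.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical task record; Asana's actual work graph schema is not public.
@dataclass
class Task:
    assignee: str
    estimated_hours: float

WEEKLY_CAPACITY = 40.0  # assumed per-person capacity, for illustration only

def workload_report(tasks):
    """Aggregate estimated hours per assignee and flag over/under capacity."""
    hours = defaultdict(float)
    for t in tasks:
        hours[t.assignee] += t.estimated_hours
    report = {}
    for person, total in hours.items():
        if total > WEEKLY_CAPACITY:
            status = "overloaded"
        elif total < 0.5 * WEEKLY_CAPACITY:
            status = "under capacity"
        else:
            status = "balanced"
        report[person] = (total, status)
    return report

tasks = [Task("ana", 48), Task("ben", 12), Task("ana", 6), Task("cho", 35)]
for person, (total, status) in workload_report(tasks).items():
    print(f"{person}: {total:.0f}h ({status})")
```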

Finally, the company is offering a new tool it calls execution intelligence. This involves providing workflow templates that a company can use to build its own workflows. "We've had workflows in our product for some time, but we're making workflow bundles super easy to pull off the shelf. So we're going to have best practice workflows that are pre-constructed that you just plug your pieces into," he said.

While none of these tools has AI built into them just yet, Hood says the next iteration of these tools will definitely be built with it. "So we have these three new features that are not AI-driven yet. But they are features that point our nose towards the types of problems that we want to solve with AI because the next generation that lives on top of them will be AI-driven," he said.

See original here:
Asana launches new work intelligence tools with AI on the way - TechCrunch

Goldman Sachs says generative A.I. could impact 300 million jobs – here's which ones – CNBC

Artificial intelligence could automate up to a quarter of work in the U.S., a Goldman Sachs report says.

As artificial intelligence products like ChatGPT aim to become a part of our everyday lives and we learn more about how powerful they can be, there's one thing on everyone's mind: how AI could impact jobs.

"Significant disruption" could be on the horizon for the labor market, a new Goldman Sachs report dated Sunday said. The bank's analysis of jobs in the U.S. and Europe shows that two-thirds of jobs could be automated at least to some degree.

In the U.S., "of those occupations which are exposed, most have a significant but partial share of their workload (25-50%) that can be replaced," Goldman Sachs analysts said in the resarch paper.

Around the world, as many as 300 million jobs could be affected, the report says. Changes to labor markets are therefore likely, although historically, technological progress doesn't just make jobs redundant: it also creates new ones.

The use of AI technology could also boost labor productivity growth and lift global GDP by as much as 7% over time, Goldman Sachs' report noted.

Certain jobs will be more impacted than others, the report explains. Jobs that require a lot of physical work are, for example, less likely to be significantly affected.

In the U.S., office and administrative support jobs have the highest proportion of tasks that could be automated, at 46%, followed by legal work at 44% and tasks within architecture and engineering at 37%.

The life, physical and social sciences sector follows closely at 36%, and business and financial operations round out the top five at 35%.

On the other end of the scale, just 1% of tasks in the building and grounds cleaning and maintenance sector are vulnerable to automation. Installation, maintenance and repair work is the second least affected industry, with 4% of work potentially being affected, and construction and extraction comes third from the bottom at 6%.

Data for Europe is slightly broader but paints a similar picture, with clerical support roles most affected (45% of their work could be automated) and just 4% of work in the crafts and related trades sector vulnerable.

Overall, 24% of work in Europe could be automated, just below the 25% average in the U.S.

These figures shift when looking at automation through AI on a global scale.

"Our estimates intuitively suggest that fewer jobs in EMs [emerging markets] are exposed to automation than in DMs [developed markets], but that 18% of work globally could be automated by AI on an employment-weighted basis," the Goldman Sachs report said.

According to the bank's analysis, Hong Kong, Israel, Japan, Sweden and the U.S. are likely to be the top five most affected countries. Meanwhile, employees in mainland China, Nigeria, Vietnam, Kenya and, in last place, India, are the least likely to see their work being taken over by AI technology.

But while the data shows that AI will undoubtedly impact the labor market, it's not yet clear how disruptive it will really be, the report concludes.

"The impact of AI will ultimately depend on its capability and adoption timeline," it says, adding that two key factors will be how powerful AI technology really becomes and how much it is used in practice.

See the original post here:
Goldman Sachs says generative A.I. could impact 300 million jobs – here's which ones - CNBC

Researchers Identify 6 Challenges Humans Face With Artificial Intelligence – Neuroscience News

Summary: Study identifies six factors humans must overcome to ensure artificial intelligence is trustworthy, safe, reliable and compatible with human values.

Source: University of Central Florida

A University of Central Florida professor and 26 other researchers have published a study identifying the challenges humans must overcome to ensure that artificial intelligence is reliable, safe, trustworthy and compatible with human values.

The study, "Six Human-Centered Artificial Intelligence Grand Challenges," was published in the International Journal of Human-Computer Interaction.

Ozlem Garibay '01MS '08PhD, an assistant professor in UCF's Department of Industrial Engineering and Management Systems, was the lead researcher for the study. She says that the technology has become more prominent in many aspects of our lives, but it also has brought about many challenges that must be studied.

"For instance, the coming widespread integration of artificial intelligence could significantly impact human life in ways that are not yet fully understood," says Garibay, who works on AI applications in material and drug design and discovery, and on how AI impacts social systems.

The six challenges Garibay and the team of researchers identified call for AI that is centered in human well-being, designed responsibly, respectful of privacy, guided by human-centered design principles, subject to appropriate governance and oversight, and respectful of human cognitive capacities in its interactions with people.

The study, which was conducted over 20 months, comprises the views of 26 international experts who have diverse backgrounds in AI technology.

"These challenges call for the creation of human-centered artificial intelligence technologies that prioritize ethicality, fairness and the enhancement of human well-being," Garibay says.

The challenges urge the adoption of a human-centered approach that includes responsible design, privacy protection, adherence to human-centered design principles, appropriate governance and oversight, and respectful interaction with human cognitive capacities.

"Overall, these challenges are a call to action for the scientific community to develop and implement artificial intelligence technologies that prioritize and benefit humanity," she says.

The group of 26 experts includes National Academy of Engineering members and researchers from North America, Europe and Asia who have broad experience across academia, industry and government. The group also has diverse educational backgrounds in areas ranging from computer science and engineering to psychology and medicine.

Their work will also be featured in a chapter of the book Human-Computer Interaction: Foundations, Methods, Technologies, and Applications.

Five UCF faculty members co-authored the study:

Garibay received her doctorate in computer science from UCF and joined UCF's Department of Industrial Engineering and Management Systems, part of the College of Engineering and Computer Science, in 2020.

Author: Robert Wells
Source: University of Central Florida
Contact: Robert Wells, University of Central Florida
Image: The image is in the public domain

Original Research: Open access. "Six Human-Centered Artificial Intelligence Grand Challenges" by Ozlem Garibay et al., International Journal of Human-Computer Interaction.

Abstract

Six Human-Centered Artificial Intelligence Grand Challenges

Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood.

Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making.

We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical, fair, and enhance the human condition.

These grand challenges are the result of an international collaboration across academia, industry and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI).

In essence, these challenges advocate for a human-centered approach to AI that (1) is centered in human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans' cognitive capacities.

We hope that these challenges and their associated research directions serve as a call for action to conduct research and development in AI that serves as a force multiplier towards more fair, equitable and sustainable societies.

See the original post:
Researchers Identify 6 Challenges Humans Face With Artificial Intelligence - Neuroscience News

Can artificial intelligence write fiction with real tension? – Financial Times

Read the original:
Can artificial intelligence write fiction with real tension? - Financial Times

ChatGPT in the Humanities Panel: Researchers Share Concerns … – Cornell University The Cornell Daily Sun

Does the next Aristotle, Emily Dickinson or Homer live on your computer? A group of panelists explored this idea in a talk titled "ChatGPT and the Humanities" on Friday in the A.D. White House's Guerlac Room.

ChatGPT's ability to produce creative literature was one of the central topics explored in the talk as the discourse on the use of artificial intelligence software in academic spheres continues to grow.

In the panel, Prof. Morten Christiansen, psychology, Prof. Laurent Dubreuil, comparative literature, Pablo Contreras Kallens grad and Jacob Matthews grad explored the benefits and consequences of utilizing artificial intelligence within humanities research and education.

The forum was co-sponsored by the Society for the Humanities, the Humanities Lab and the New Frontier Grant program.

The Society for the Humanities was established in 1966 and connects visiting fellows, Cornell faculty and graduate students to conduct interdisciplinary research connected to an annual theme. This year's focal theme is "Repair," which refers to the conservation, restoration and replication of objects, relations and histories.

All four panelists are members of the Humanities Lab, which works to provide an intellectual space for scholars to pursue research relating to the interaction between the sciences and the humanities. The lab was founded by Dubreuil in 2019 and is currently led by him.

Christiansen and Dubreuil also recently received New Frontier Grants for their project titled "Poetry, AI and the Mind: A Humanities-Cognitive Science Transdisciplinary Exploration," which focuses on the application of artificial intelligence to literature, cognitive science and mental and cultural diversity. For well over a year, they have worked on an experiment comparing humans' poetry generation to that of ChatGPT, with the continuous help of Contreras Kallens and Matthews.

Before the event began, attendees expressed their curiosity and concerns about novel AI technology.

Lauren Scheuer, a writing specialist at the Keuka College Writing Center and Tompkins County local, described worries about the impact of ChatGPT on higher education.

"I'm concerned about how ChatGPT is being used to teach and to write and to generate content," Scheuer said.

Sarah Milliron grad, who is pursuing a Ph.D. in psychology, also said that she was concerned about ChatGPT's impact on academia as the technology becomes more widely used.

"I suppose I'm hoping [to gain] a bit of optimism [from this panel]," Milliron said. "I hope that they address ways that we can work together with AI as opposed to [having] it be something that we ignore or have it be something that we are trying to get rid of."

Dubreuil first explained that there has been a recent interest in artificial intelligence due to the impressive performance of ChatGPT and its successful marketing campaign.

"All scholars, but especially humanities, are currently wondering if we should take into account the new capabilities of automated text generators," Dubreuil said.

Dubreuil expressed that scholars have varying concerns and ideas regarding ChatGPT.

"Some [scholars] believe we should counteract [ChatGPT's consequences] by means of new policies," Dubreuil said. "Other [scholars] complained about the lack of morality or the lack of political apropos that is exhibited by ChatGPT. Other [scholars] say that there is too much political apropos and political correctness."

Dubreuil noted that other scholars prophesy that AI could lead to the fall of humanity.

For example, historian Yuval Harari recently wrote about the 2022 Expert Survey on Progress in AI, which found that out of more than 700 surveyed top academics and researchers, half said that there was at least a 10 percent chance of human extinction or similarly permanent and severe disempowerment due to future AI systems.

Contreras Kallens then elaborated on their poetry experiment, which utilized what he referred to as fragment completion: essentially, ChatGPT and Cornell undergraduates were both prompted to continue writing from two lines of poetry by an author such as Dickinson.
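The article does not include the researchers' actual prompt or tooling, so the following is only an illustrative guess at what a fragment-completion request might look like using the public OpenAI Python SDK; the model choice, prompt wording and sampling settings are assumptions, not the team's experimental protocol.

```python
# A guessed-at fragment-completion call using the public OpenAI SDK.
# Model choice, prompt wording, and temperature are assumptions, not the
# researchers' actual experimental setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two public-domain lines from Dickinson, as an example fragment.
fragment = (
    "Because I could not stop for Death -\n"
    "He kindly stopped for me -"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model
    messages=[{
        "role": "user",
        "content": (
            "Continue the following two lines of poetry in the same "
            "style, adding four more lines:\n\n" + fragment
        ),
    }],
    temperature=0.9,  # allow some creative variation for poetry
)

print(response.choices[0].message.content)
```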

Contreras Kallens said that ChatGPT generally matched the poetry quality of a Cornell undergraduate while, as expected, falling short of the original authors' writing. However, the author recognition program they used actually confused the artificial productions with the original authors' work.

The final part of the project, which the group is currently refining, will measure whether students can tell whether a fragment was completed by the original author, an undergraduate or ChatGPT.

When describing the importance of this work, Contreras Kallens explained the concept of universal grammar, a linguistics theory suggesting that people are innately biologically programmed to learn grammar. ChatGPT's ability to reach the writing quality of many humans thus challenges assumptions about technology's shortcomings.

"[This model] invites a deeper reconsideration of language assumptions or language acquisition processing," Contreras Kallens said. "And that's at least interesting."

Matthews then expressed that his interest in AI does not lie in its generative abilities but in the possibility of representing text numerically and computationally.

"Often humanists are dealing with large volumes of text [and] they might be very different," Matthews said. "[It is] fundamental to the humanities that we debate [with each other] about what texts mean, how they relate to one another; we're always putting different things into relation with one another. And it would be nice sometimes to have a computational or at least quantitative basis that we could maybe talk about, or debate or at least have access to."

Matthews described how autoregressive language models, machine learning models that predict the next word in a text from the words that precede it, reveal the perceived similarity between certain words.

Through assessing word similarity, Matthews found that ChatGPT contains gendered language bias, which he said reflects the bias in human communication.

For example, Matthews inputted the names Mary and James, the most common female and male names in the United States, along with Sam, which was used as a gender-neutral name. He found that James is closer to the occupations of lawyer, programmer and doctor than the other names, particularly Mary.
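The article does not detail Matthews' setup, but name-occupation association in an embedding model is typically probed with cosine similarity between word vectors. A sketch under that assumption, using the OpenAI embeddings endpoint; the embedding model name and word lists here are illustrative, not the panelists' actual configuration.

```python
# Probing name-occupation association via cosine similarity of embeddings.
# The embedding model and word lists are illustrative assumptions, not the
# panelists' actual configuration.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    """Return one embedding vector per input string."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [np.array(d.embedding) for d in resp.data]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

names = ["Mary", "James", "Sam"]
occupations = ["lawyer", "programmer", "doctor"]

name_vecs = dict(zip(names, embed(names)))
occ_vecs = dict(zip(occupations, embed(occupations)))

# A systematically higher similarity for one name across occupations would
# suggest a gendered association learned from the training text.
for name in names:
    for occ in occupations:
        print(f"{name} ~ {occ}: {cosine(name_vecs[name], occ_vecs[occ]):.3f}")
```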

Matthews explained that these biases were more prevalent in previous language modeling systems, but that the makers of GPT-3.5 (the embedding model behind ChatGPT, as opposed to GPT-3, the model currently available to the public) have acknowledged bias in their systems.

"It's not just that [these models] learn language; they're also exposed to biases that are present in text," Matthews said. "This can be visible in social contexts especially, and if we're deploying these models, this has consequences if they're used in decision making."

Matthews also demonstrated that encoding systems can textually analyze and compare literary works, such as those by Shakespeare and Dickinson, making them a valuable resource for humanists, especially regarding large texts.

"Humanists are already engaged in thinking about these types of questions [referring to the models' semantic and cultural analyses]," Matthews said. "But we might not have the capacity or the time to analyze the breadth of text that we want to, and we might not be able to assign or even to recall all the things that we're reading. So if we're using this in parallel with the existing skill sets that humanists have, I think that this is really valuable."

Christiansen, who is part of a new University-wide committee looking into the potential use of generative AI, then talked about the opportunities and challenges of the use of AI in education and teaching.

Christiansen said that one positive pedagogical use of ChatGPT is to have students ask the software specific questions and then criticize the answers. He also explained that ChatGPT may help with the planning process of writing, a step he noted many students frequently discount.

"I think also, importantly, that [utilizing ChatGPT in writing exercises] can actually provide a bit of a level playing field for second language learners, of which we have many here at Cornell," Christiansen said.

Christiansen added that ChatGPT can act as a personal tutor, help students develop better audience sensitivity, work as a translator and provide summaries.

However, these models also have several limitations. For instance, ChatGPT knows very little about any events that occurred after September 2021 and will be clueless about recent issues, such as the Ukraine war.

Furthermore, Christiansen emphasized that these models can and will hallucinate, which refers to their making up information, including falsifying references. He also noted that students could potentially use ChatGPT to violate academic integrity.

Overall, Dubreuil expressed concern for the impact of technologies such as ChatGPT on innovation. He explained that ChatGPT currently only reorganizes data, which falls short of true invention.

"There is a wide range between simply incremental inventions and rearrangements that are such that they not only rearrange the content, but they reconfigure the given and the way the given was produced: its meanings, its values and its consequences," Dubreuil said.

Dubreuil argued that if standards for human communication do not require invention, not only will AI produce work that is not truly creative, but humans may become less inventive as well.

"It has to be said that through social media, especially through our algorithmic life these days, we may have prepared our own minds to become much more similar to a chatbot. We may be reprogramming ourselves constantly and that's the danger," Dubreuil said. "The challenge of AI is a provocation toward reform."

Correction, March 27, 2:26 p.m.: A previous version of this article incorrectly stated the time frame about which ChatGPT is familiar and the current leaders of the Humanities Lab. In addition, minor clarification has been added to the description of Christiansen and Dubreuil's study on AI poetry generation. The Sun regrets these errors, and the article has been corrected.

More:
ChatGPT in the Humanities Panel: Researchers Share Concerns ... - Cornell University The Cornell Daily Sun