Archive for the ‘Ai’ Category

VeChain and SingularityNET team up on AI to fight climate change – Cointelegraph

Artificial intelligence firm SingularityNET and blockchain firm VeChain have become the latest companies to marry blockchain with artificial intelligence, this time with the aim of cutting carbon emissions.

Over the last year, the crypto industry has seen an increasing amount of collaboration between blockchain and AI technology.

On Aug. 24, VeChain, a smart contract-compatible blockchain used for supply-chain tracking, announced a strategic collaboration with the decentralized AI services-sharing platform SingularityNET.

In a joint statement, the firms said the partnership will merge VeChain's enterprise data with SingularityNET's advanced AI algorithms to enhance automation of manual processes and provide real-time data.

SingularityNET founder and CEO Ben Goertzel told Cointelegraph that blockchain and AI go hand-in-hand and can solve problems where traditional approaches often fail.

"The last few years have taught the world that when the right AI algorithms meet the right data on sufficient processing power, magic can happen," said Goertzel.

Goertzel explained that the partnership could, for example, allow AI to identify new ways to use VeChain's blockchain data to optimize carbon emissions and minimize pollution.

"Achieving a sustainable and environmentally positive economy is an extremely complex problem involving coordination of a large number of different economic players," he added.

Meanwhile, VeChain Chief Technology Officer Antonio Senatore added: "Blockchain and AI offer game-changing capabilities for industries and enterprises and are opening new avenues of operation."


In July, Bitcoin miner Hive Blockchain changed its name and business strategy as part of its foray into the emerging field of AI. Hive Digital Technologies CEO Aydin Kilic told Cointelegraph in August that blockchain and AI are both pillars of Web3.

In June, Ethereum layer-2 scaling network Polygon announced its integration of AI technology. The AI interface, called Polygon Copilot, will help developers obtain analytics and insights for decentralized applications (DApps) on the network.

Dr. Daoyuan Wu, an AI researcher at Nanyang Technological University in Singapore and a MetaTrust affiliate, told Cointelegraph that the inherent autonomy of AI aligns seamlessly with the decentralized and autonomous characteristics of blockchain and smart contracts.

MetaTrust Labs is working on a project called GPTScan, a tool that combines a Generative Pre-trained Transformer (GPT) with static analysis to detect logic vulnerabilities in smart contracts.

"GPTScan is the first tool of its kind that utilizes GPT to match candidate vulnerable functions based on code-level scenarios and properties," Wu added in an interview with Cointelegraph.
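In broad strokes, a pipeline like the one Wu describes pairs a cheap static pass, which prunes the search space, with an LLM check that matches each surviving candidate against a described vulnerability scenario. The sketch below is a minimal, hypothetical illustration of that division of labor, not GPTScan's actual code or API; the scenario prompt, the regex, and the injected `ask_llm` callable are all invented for the example.

```python
# Illustrative GPT-plus-static-analysis scan; all names are hypothetical,
# not GPTScan's real interface.
import re

VULN_SCENARIO = (
    "Does this Solidity function transfer tokens using an amount taken "
    "from user input without checking the sender's balance? "
    "Answer YES or NO."
)

def static_candidates(source: str) -> list[str]:
    """Cheap static pass: keep only functions that appear to move value."""
    functions = re.findall(r"function\s+\w+\([^)]*\)[^{]*\{[^}]*\}", source, re.S)
    return [f for f in functions if "transfer" in f or "call{value" in f]

def gpt_matches(function_src: str, ask_llm) -> bool:
    """Ask an LLM (any injected chat-completion callable) to match the scenario."""
    answer = ask_llm(f"{VULN_SCENARIO}\n\n{function_src}")
    return answer.strip().upper().startswith("YES")

def scan(source: str, ask_llm) -> list[str]:
    # Static analysis narrows the candidates; the GPT model confirms the
    # code-level scenario, which is the division of labor described above.
    return [f for f in static_candidates(source) if gpt_matches(f, ask_llm)]
```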



View original post here:

VeChain and SingularityNET team up on AI to fight climate change - Cointelegraph

Using Generative AI to Resurrect the Dead Will Create a Burden for … – WIRED

Given enough data, it can feel possible to keep dead loved ones alive. With ChatGPT and other powerful large language models, it is feasible to create a more convincing chatbot of a dead person. But doing so, especially in the face of scarce resources and inevitable decay, ignores the massive amounts of labor that go into keeping the dead alive online.

Someone always has to do the hard work of maintaining automated systems, as demonstrated by the overworked and underpaid annotators and content moderators behind generative AI, and this is also true where replicas of the dead are concerned. From managing a digital estate after gathering passwords and account information, to navigating a slowly decaying inherited smart home, digital death care practices require significant upkeep. Content creators depend on the backend labor of caregivers and a network of human and nonhuman entities, from specific operating systems and devices to server farms, to keep digital heirlooms alive across generations. Updating formats and keeping those electronic records searchable, usable, and accessible requires labor, energy, and time. This is a problem for archivists and institutions, but also for individuals who might want to preserve the digital belongings of their dead kin.

And even with all of this effort, devices, formats, and websites also die, just as we frail humans do. Despite the fantasy of an automated home that can run itself in perpetuity or a website that can survive for centuries, planned obsolescence means these systems will most certainly decay. As people tasked with maintaining the digital belongings of dead loved ones can attest, there is a stark difference between what people think they want, or what they expect others to do, and the reality of what it means to help technologies persist over time. The mortality of both people and technology means that these systems will ultimately stop working.

Early attempts to create AI-backed replicas of dead humans certainly bear this out. Intellitar's Virtual Eternity, based in Scottsdale, Arizona, launched in 2008 and used images and speech patterns to simulate a human's personality, perhaps filling in for someone at a business meeting or chatting with grieving loved ones after a person's death. Writing for CNET, a reviewer dubbed Intellitar the product most likely to make children cry. But soon after the company went under in 2012, its website disappeared. LifeNaut, a project backed by the transhumanist organization Terasem, which is also known for creating BINA48, a robotic version of Bina Aspen, the wife of Terasem's founder, will purportedly combine genetic and biometric information with personal datastreams to simulate a full-fledged human being once technology makes it possible to do so. But the project's site itself relies on outmoded Flash software, indicating that the true promise of digital immortality is likely far off and will require updates along the way.

With generative AI, there is speculation that we might be able to create even more convincing facsimiles of humans, including dead ones. But this requires vast resources, including raw materials, water, and energy, pointing to the folly of maintaining chatbots of the dead in the face of catastrophic climate change. It also has astronomical financial costs: ChatGPT reportedly costs $700,000 a day to run, a burn rate that some have speculated could bankrupt OpenAI by 2024. This is not a sustainable model for immortality.

There is also the question of who should have the authority to create these replicas in the first place: a close family member, an employer, a company? Not everyone would want to be reincarnated as a chatbot. In a 2021 piece for the San Francisco Chronicle, the journalist Jason Fagone recounts the story of a man named Joshua Barbeau who produced a chatbot version of his long-dead fiancée Jessica using OpenAI's GPT-3. It was a way for him to cope with death and grief, but it also kept him invested in a close romantic relationship with a person who was no longer alive. This was also not the way that Jessica's other loved ones wanted to remember her; family members opted not to interact with the chatbot.

Go here to read the rest:

Using Generative AI to Resurrect the Dead Will Create a Burden for ... - WIRED

Warner Calls on Biden Administration to Remain Engaged in AI … – Senator Mark Warner

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged the Biden administration to build on its recently announced voluntary commitments from several prominent artificial intelligence (AI) leaders in order to promote greater security, safety, and trust in the rapidly developing AI field.

As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including the ability to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. On Friday, the Biden administration announced that several AI companies had agreed to a series of measures that would promote greater security and transparency. Sen. Warner wrote to the administration applauding these efforts and laying out a series of next steps to bolster this progress, including extending commitments to less capable models, seeking consumer-facing commitments, and developing an engagement strategy to better address security risks.

"These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks," Sen. Warner wrote. "As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models."

The letter builds on Sen. Warner's continued advocacy for the responsible development and deployment of AI. In April, Sen. Warner directly expressed concerns to several AI CEOs about the potential risks posed by AI and called on companies to ensure that their products and systems are secure.

The letter also affirms Congress' role in regulating AI and expands on the annual Intelligence Authorization Act, legislation that recently passed unanimously through the Senate Select Committee on Intelligence. Sen. Warner urges the administration to adopt the strategy outlined in this pending bill as well as work with the FBI, CISA, ODNI, and other federal agencies to fully address the potential risks of AI technology.

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In addition to his April letters, he has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

A copy of the letter can be found here and below.

Dear President Biden,

I write to applaud the Administration's significant efforts to secure voluntary commitments from leading AI vendors related to promoting greater security, safety, and trust through improved development practices. These commitments, largely applicable to these vendors' most advanced products, can materially reduce a range of security and safety risks identified by researchers and developers in recent years. In April, I wrote to a number of these same companies, urging them to prioritize security and safety in their development, product release, and post-deployment practices. Among other things, I asked them to fully map dependencies and downstream implications of compromise of their systems; focus greater financial, technical, and personnel resources on internal security; and improve their transparency practices through greater documentation of system capabilities, system limitations, and training data.

These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks. Moreover, a growing roster of highly capable open source models has been released to the public; these models would benefit from similar pre-deployment commitments contained in a number of the July 21st obligations. As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models.

To be sure, responsibility ultimately lies with Congress to develop laws that advance consumer and patient safety, address national security and cyber-crime risks, and promote secure development practices in this burgeoning and highly consequential industry and in the downstream industries integrating their products. In the interim, the important commitments your Administration has secured can be bolstered in a number of important ways.

First, I strongly encourage your Administration to continue engagement with this industry to extend all of these commitments more broadly to less capable models that, in part through their wider adoption, can produce the most frequent examples of misuse and compromise.

Second, it is vital to build on these developer- and researcher-facing commitments with a suite of lightweight consumer-facing commitments to prevent the most serious forms of abuse. Most prominent among these should be commitments from leading vendors to adopt development practices, licensing terms, and post-deployment monitoring practices that prevent non-consensual intimate image generation, social-scoring, real-time facial recognition (in contexts not governed by existing legal protections or due process safeguards), and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.

Lastly, the Administration's successful high-level engagement with the leadership of these companies must be complemented by a deeper engagement strategy to track national security risks associated with these technologies. In June, the Senate Select Committee on Intelligence on a bipartisan basis advanced our annual Intelligence Authorization Act, a provision of which directed the President to establish a strategy to better engage vendors, downstream commercial users, and independent researchers on the security risks posed by, or directed at, AI systems.

This provision was spurred by conversations with leading vendors, who confided that they would not know how best to report malicious activity, such as suspected intrusions of their internal networks, observed efforts by foreign actors to generate or refine malware using their tools, or identified activity by foreign malign actors to generate content to mislead or intimidate voters. To be sure, a highly capable and well-established set of resources, processes, and organizations, including the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Office of the Director of National Intelligence's Foreign Malign Influence Center, exists to engage these communities, including through counter-intelligence education and defensive briefings. Nonetheless, it appears that these entities have not been fully activated to engage the range of key stakeholders in this space. For this reason, I would encourage you to pursue the contours of the strategy outlined in our pending bill.

Thank you for your Administration's important leadership in this area. I look forward to working with you to develop bipartisan legislation in this area.

###

Read more here:

Warner Calls on Biden Administration to Remain Engaged in AI ... - Senator Mark Warner

Advisory report begins integration of generative AI at U-M | The … – The University Record

A committee looking into how generative artificial intelligence affects University of Michigan students, faculty, researchers and staff has issued a report that attempts to lay a foundation for how U-M will live and work with this new technology.

Recommendations include:

The report is available to the public at a website created by the committee and Information and Technology Services to guide how faculty, staff and students can responsibly and effectively use GenAI in their daily lives.

U-M also has announced it will release its own suite of university-hosted GenAI services that are focused on providing safe and equitable access to AI tools for all members of the U-M community. They are expected to be released before students return to campus this fall.

"GenAI is shifting paradigms in higher education, business, the arts and every aspect of our society. This report represents an important first step in U-M's intention to serve as a global leader in fostering the responsible, ethical and equitable use of GenAI in our community and beyond," said Laurie McCauley, provost and executive vice president for academic affairs.

The report offers recommendations on everything from how instructors can effectively use GenAI in their classrooms to how students can protect themselves when using popular GenAI tools, such as ChatGPT, without exposing themselves to risks of sharing sensitive data.

"More than anything, the intention of the report is to be a discussion starter," said Ravi Pendse, vice president for information technology and chief information officer. "We have heard overwhelmingly from the university community that they needed some direction on how to work with GenAI, particularly before the fall semester started. We think this report and the accompanying website are a great start to some much-needed conversations."

McCauley and Pendse sponsored the creation of the Generative Artificial Intelligence Advisory Committee in May. Since then, the 18-member committee, composed of faculty, staff and students from across all segments of U-M, has worked together to provide vital insights into how GenAI technology could affect their communities.

"Our goals were to present strategic directions and guidance on how GenAI can enhance the educational experience, enrich research capabilities, and bolster U-M's leadership in this era of digital transformation," said committee chair Karthik Duraisamy, professor of aerospace engineering and of mechanical engineering, and director of the Michigan Institute for Computational Discovery and Engineering.

"Committee members put in an enormous amount of work to identify the potential benefits of GenAI to the diverse missions of our university, while also shedding light on the opportunities and challenges of this rapidly evolving technology."

"This is an exciting time," McCauley added. "I am impressed by the work of this group of colleagues. Their report asks important questions and provides thoughtful guidance in a rapidly evolving area."

Pendse stressed the GenAI website will be constantly updated and will serve as a hub for the various discussions related to the topic across U-M.

"We know that almost every group at U-M is having their own conversations about GenAI right now," Pendse said. "With the release of this report and the website, we hope to create a knowledge hub where students, faculty and staff have one central location where they can come looking for advice. I am proud that U-M is serving both as a local and global leader when it comes to the use of GenAI."

Read the original here:

Advisory report begins integration of generative AI at U-M | The ... - The University Record

From Hollywood to Sheffield, these are the AI stories to read this month – World Economic Forum

AI regulation is progressing across the world as policymakers try to protect against the risks it poses without curtailing AI's potential.

In July, Chinese regulators introduced rules to oversee generative AI services. Their focus stems from a concern over the potential for generative AI to create content that conflicts with Beijing's viewpoints.

The success of ChatGPT and similarly sophisticated AI bots has prompted a string of announcements from Chinese technology firms joining the fray. These include Alibaba, which has launched an AI image generator to trial among its business customers.

The new regulation requires generative AI services in China to have a licence, conduct security assessments, and adhere to socialist values. If "illegal" content is generated, the relevant service provider must stop this, improve its algorithms, and report the offending material to the authorities.

The new rules relate only to generative AI services for the public, not to systems developed for research purposes or niche applications, striking a balance between keeping close tabs on AI and positioning China as a leader in the field.

The use of AI in film and TV is one of the issues behind the ongoing strike by Hollywood actors and writers that has led to production stoppages worldwide. As their unions renegotiate contracts, workers in the entertainment sector have come out to protest against their work being used to train AI systems that could ultimately replace them.

The AI proposal put forward by the Alliance of Motion Picture and Television Producers reportedly stated that background performers would receive one day's pay for getting their image scanned digitally. This scan would then be available for use by the studios from then on.

China is not alone in creating a framework for AI. A new law in the US regulates the influence of AI on recruitment as more of the hiring process is handed over to algorithms.

From browsing CVs and scoring interviews to scraping social media for personality profiles, recruiters are increasingly using the capabilities of AI to speed up and improve hiring. To protect workers against a potential AI bias, New York City's local government is mandating greater transparency about the use of AI and annual audits for potential bias in recruitment and promotion decisions.
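The audits behind such rules typically boil down to comparing selection rates across demographic groups. Below is a minimal, hypothetical sketch of that arithmetic, the adverse-impact ratio familiar from employment-discrimination analysis, not the text of the New York City rule itself:

```python
# Illustrative adverse-impact calculation for hiring decisions;
# the data model and threshold interpretation are our assumptions.
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(decisions):
    # Each group's selection rate divided by the highest group's rate;
    # ratios well below 1.0 flag potential adverse impact.
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Toy data: 50% vs. 30% selection rates give an impact ratio of 0.6.
sample = [("A", True)] * 5 + [("A", False)] * 5 + \
         [("B", True)] * 3 + [("B", False)] * 7
print(impact_ratios(sample))  # {'A': 1.0, 'B': 0.6}
```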

A group of AI experts, including researchers from Meta, Google, and Samsung, has created a new framework for developing AI products safely. It consists of a checklist with 84 questions for developers to consider before starting an AI project. The World Ethical Data Foundation is also asking the public to submit their own questions ahead of its next conference. Since its launch, the framework has gained support from hundreds of signatories in the AI community.

In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks to ensure responsible and beneficial outcomes for all, the Forum's Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance.

The Alliance will unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.

Meanwhile, generative AI is gaining a growing user base, sparked by the launch of ChatGPT last November. A survey by Deloitte found that more than a quarter of UK adults have used generative AI tools like chatbots. This is even higher than the adoption rate of voice-assisted speakers like Amazon's Alexa. Around one in 10 people also use AI at work.

Nearly a third of college students have admitted to using ChatGPT for written assignments such as college essays and high-school art projects. Companies providing AI-detecting tools have been run off their feet as teachers seek help identifying AI-driven cheating. With only one full academic semester elapsed since the launch of ChatGPT, AI-detection companies predict even greater disruption ahead as schools weigh comprehensive responses.

Chart: 30% of college students use ChatGPT for assignments, to varying degrees. (Image: Intelligent.com)

Another area where AI could ring in fundamental changes is journalism. The New York Times, the Washington Post, and News Corp are among publishers talking to Google about using artificial intelligence tools to assist journalists in writing news articles. The tools could help with options for headlines and writing styles but are not intended to replace journalists. News of the talks comes after the Associated Press announced a partnership with OpenAI for the same purpose. However, some news outlets have been hesitant to adopt AI, citing concerns about incorrect information and about distinguishing human- from AI-generated content.

Developers of robots and autonomous machines could learn lessons from honeybees when it comes to making fast and accurate decisions, according to scientists at the University of Sheffield. Bees trained to recognize different coloured flowers took only 0.6 seconds on average to decide to land on a flower they were confident would have food, and they were just as quick to reject flowers they were confident would not. They also made more accurate decisions than humans, despite their small brains. The scientists have now built these findings into a computer model.
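This finding fits the broader family of evidence-accumulation models long used to study speed-accuracy trade-offs. The toy sketch below illustrates that general mechanism with invented parameters; it is not the Sheffield team's published model:

```python
# Illustrative evidence-accumulation decision rule; thresholds and noise
# levels are made up for the example, not taken from the bee study.
import random

def decide(signal_quality: float, threshold: float = 3.0,
           step: float = 1.0, noise: float = 1.5, max_steps: int = 100):
    """Accumulate noisy evidence until an accept or reject bound is hit.

    signal_quality > 0 favors 'land', < 0 favors 'fly past'; the threshold
    trades speed against accuracy, mirroring the bee behavior above.
    """
    evidence = 0.0
    for t in range(1, max_steps + 1):
        evidence += signal_quality * step + random.gauss(0, noise)
        if evidence >= threshold:
            return "land", t
        if evidence <= -threshold:
            return "fly past", t
    return "undecided", max_steps

# A strongly rewarding flower is accepted quickly; an ambiguous one takes
# longer, matching the fast-when-confident pattern the study reports.
print(decide(signal_quality=1.0))
print(decide(signal_quality=0.1))
```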

Generative AI is set to impact a vast range of areas. For the global economy, it could add trillions of dollars in value, according to a new report by McKinsey & Company. It also found that the use of generative AI could lead to labour productivity growth of 0.1-0.6% annually through 2040.
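For a rough sense of scale, those annual rates compound. A back-of-the-envelope calculation, assuming (our assumption, not McKinsey's) that the range applies each year from 2024 through 2040, i.e. 17 years:

```python
# Compounding the 0.1-0.6% annual productivity-growth range above
# over the 17 years from 2024 through 2040 (an illustrative assumption).
for rate in (0.001, 0.006):
    print(f"{rate:.1%}/yr -> {(1 + rate) ** 17 - 1:.1%} cumulative")
# 0.1%/yr -> 1.7% cumulative
# 0.6%/yr -> 10.7% cumulative
```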

At the same time, generative AI could lead to an increase in cyberattacks on small and medium-sized businesses, which are particularly exposed to this risk. AI makes new, highly sophisticated tools available to cybercriminals. However, it can be used to create better security tools to detect attacks and deploy automatic responses, according to Microsoft.

Because AI systems are designed and trained by humans, they can generate biased results due to the design choices made by developers. AI may therefore be prone to perpetuating inequalities, though this can be mitigated by training AI systems to recognize and correct for their own bias.

Read more from the original source:

From Hollywood to Sheffield, these are the AI stories to read this month - World Economic Forum