Archive for the ‘Ai’ Category

Warner Calls on Biden Administration to Remain Engaged in AI … – Senator Mark Warner

WASHINGTON - U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged the Biden administration to build on its recently announced voluntary commitments from several prominent artificial intelligence (AI) leaders in order to promote greater security, safety, and trust in the rapidly developing AI field.

As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. On Friday, the Biden administration announced that several AI companies had agreed to a series of measures that would promote greater security and transparency. Sen. Warner wrote to the administration to applaud these efforts and laid out a series of next steps to bolster this progress, including extending commitments to less capable models, seeking consumer-facing commitments, and developing an engagement strategy to better address security risks.

"These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks," Sen. Warner wrote. "As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models."

The letter builds on Sen. Warner's continued advocacy for the responsible development and deployment of AI. In April, Sen. Warner directly expressed concerns to several AI CEOs about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure.

The letter also affirms Congress' role in regulating AI, and expands on the annual Intelligence Authorization Act, legislation that recently passed unanimously through the Senate Select Committee on Intelligence. Sen. Warner urges the administration to adopt the strategy outlined in this pending bill as well as to work with the FBI, CISA, ODNI, and other federal agencies to fully address the potential risks of AI technology.

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In addition to his April letters, he has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

A copy of the letter can be found here and below.

Dear President Biden,

I write to applaud the Administration's significant efforts to secure voluntary commitments from leading AI vendors related to promoting greater security, safety, and trust through improved development practices. These commitments, largely applicable to these vendors' most advanced products, can materially reduce a range of security and safety risks identified by researchers and developers in recent years. In April, I wrote to a number of these same companies, urging them to prioritize security and safety in their development, product release, and post-deployment practices. Among other things, I asked them to fully map dependencies and downstream implications of compromise of their systems; focus greater financial, technical, and personnel resources on internal security; and improve their transparency practices through greater documentation of system capabilities, system limitations, and training data.

These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks. Moreover, a growing roster of highly capable open source models have been released to the public and would benefit from similar pre-deployment commitments contained in a number of the July 21st obligations. As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models.

To be sure, responsibility ultimately lies with Congress to develop laws that advance consumer and patient safety, address national security and cyber-crime risks, and promote secure development practices in this burgeoning and highly consequential industry and in the downstream industries integrating their products. In the interim, the important commitments your Administration has secured can be bolstered in a number of important ways.

First, I strongly encourage your Administration to continue engagement with this industry to extend all of these commitments more broadly to less capable models that, in part through their wider adoption, can produce the most frequent examples of misuse and compromise.

Second, it is vital to build on these developer- and researcher-facing commitments with a suite of lightweight consumer-facing commitments to prevent the most serious forms of abuse. Most prominent among these should be commitments from leading vendors to adopt development practices, licensing terms, and post-deployment monitoring practices that prevent non-consensual intimate image generation, social-scoring, real-time facial recognition (in contexts not governed by existing legal protections or due process safeguards), and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.

Lastly, the Administration's successful high-level engagement with the leadership of these companies must be complemented by a deeper engagement strategy to track national security risks associated with these technologies. In June, the Senate Select Committee on Intelligence, on a bipartisan basis, advanced our annual Intelligence Authorization Act, a provision of which directed the President to establish a strategy to better engage vendors, downstream commercial users, and independent researchers on the security risks posed by, or directed at, AI systems.

This provision was spurred by conversations with leading vendors, who confided that they would not know how best to report malicious activity such as suspected intrusions of their internal networks, observed efforts by foreign actors to generate or refine malware using their tools, or identified activity by foreign malign actors to generate content to mislead or intimidate voters. To be sure, a highly capable and well-established set of resources, processes, and organizations (including the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Office of the Director of National Intelligence's Foreign Malign Influence Center) exists to engage these communities, including through counter-intelligence education and defensive briefings. Nonetheless, it appears that these entities have not been fully activated to engage the range of key stakeholders in this space. For this reason, I would encourage you to pursue the contours of the strategy outlined in our pending bill.

Thank you for your Administration's important leadership in this area. I look forward to working with you to develop bipartisan legislation in this area.

###

Read more here:

Warner Calls on Biden Administration to Remain Engaged in AI ... - Senator Mark Warner

Advisory report begins integration of generative AI at U-M | The … – The University Record

A committee looking into how generative artificial intelligence affects University of Michigan students, faculty, researchers and staff has issued a report that attempts to lay a foundation for how U-M will live and work with this new technology.

Recommendations include:

The report is available to the public at a website created by the committee and Information and Technology Services to guide how faculty, staff and students can responsibly and effectively use GenAI in their daily lives.

U-M also has announced it will release its own suite of university-hosted GenAI services that are focused on providing safe and equitable access to AI tools for all members of the U-M community. They are expected to be released before students return to campus this fall.

"GenAI is shifting paradigms in higher education, business, the arts and every aspect of our society. This report represents an important first step in U-M's intention to serve as a global leader in fostering the responsible, ethical and equitable use of GenAI in our community and beyond," said Laurie McCauley, provost and executive vice president for academic affairs.

The report offers recommendations on everything from how instructors can effectively use GenAI in their classrooms to how students can protect themselves when using popular GenAI tools, such as ChatGPT, without risking the exposure of sensitive data.

"More than anything, the intention of the report is to be a discussion starter," said Ravi Pendse, vice president for information technology and chief information officer. "We have heard overwhelmingly from the university community that they needed some direction on how to work with GenAI, particularly before the fall semester started. We think this report and the accompanying website are a great start to some much-needed conversations."

McCauley and Pendse sponsored the creation of the Generative Artificial Intelligence Advisory Committee in May. Since then, the 18-member committee, composed of faculty, staff and students from across all segments of U-M, has worked together to provide vital insights into how GenAI technology could affect their communities.

"Our goals were to present strategic directions and guidance on how GenAI can enhance the educational experience, enrich research capabilities, and bolster U-M's leadership in this era of digital transformation," said committee chair Karthik Duraisamy, professor of aerospace engineering and of mechanical engineering, and director of the Michigan Institute for Computational Discovery and Engineering.

"Committee members put in an enormous amount of work to identify the potential benefits of GenAI to the diverse missions of our university, while also shedding light on the opportunities and challenges of this rapidly evolving technology."

"This is an exciting time," McCauley added. "I am impressed by the work of this group of colleagues. Their report asks important questions and provides thoughtful guidance in a rapidly evolving area."

Pendse stressed the GenAI website will be constantly updated and will serve as a hub for the various discussions related to the topic across U-M.

"We know that almost every group at U-M is having their own conversations about GenAI right now," Pendse said. "With the release of this report and the website, we hope to create a knowledge hub where students, faculty and staff have one central location where they can come looking for advice. I am proud that U-M is serving both as a local and global leader when it comes to the use of GenAI."

Read the original here:

Advisory report begins integration of generative AI at U-M | The ... - The University Record

From Hollywood to Sheffield, these are the AI stories to read this month – World Economic Forum

AI regulation is progressing across the world as policymakers try to protect against the risks it poses without curtailing AI's potential.

In July, Chinese regulators introduced rules to oversee generative AI services. Their focus stems from a concern over the potential for generative AI to create content that conflicts with Beijing's viewpoints.

The success of ChatGPT and similarly sophisticated AI bots has prompted Chinese technology firms to announce they are joining the fray. These include Alibaba, which has launched an AI image generator for trial among its business customers.

The new regulation requires generative AI services in China to have a licence, conduct security assessments, and adhere to socialist values. If "illegal" content is generated, the relevant service provider must stop this, improve its algorithms, and report the offending material to the authorities.

The new rules relate only to generative AI services for the public, not to systems developed for research purposes or niche applications, striking a balance between keeping close tabs on AI and positioning China as a leader in the field.

The use of AI in film and TV is one of the issues behind the ongoing strike by Hollywood actors and writers that has led to production stoppages worldwide. As their unions renegotiate contracts, workers in the entertainment sector have come out to protest against their work being used to train AI systems that could ultimately replace them.

The AI proposal put forward by the Alliance of Motion Picture and Television Producers reportedly stated that background performers would receive one day's pay for getting their image scanned digitally. This scan would then be available for use by the studios from then on.

China is not alone in creating a framework for AI. A new law in the US regulates the influence of AI on recruitment as more of the hiring process is handed over to algorithms.

From browsing CVs and scoring interviews to scraping social media for personality profiles, recruiters are increasingly using the capabilities of AI to speed up and improve hiring. To protect workers against a potential AI bias, New York City's local government is mandating greater transparency about the use of AI and annual audits for potential bias in recruitment and promotion decisions.

A group of AI experts, including specialists from Meta, Google, and Samsung, has created a new framework for developing AI products safely. It consists of a checklist with 84 questions for developers to consider before starting an AI project. The World Ethical Data Foundation is also asking the public to submit their own questions ahead of its next conference. Since its launch, the framework has gained support from hundreds of signatories in the AI community.

In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks to ensure responsible and beneficial outcomes for all, the Forum's Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance.

The Alliance will unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.

Meanwhile, generative AI is gaining a growing user base, sparked by the launch of ChatGPT last November. A survey by Deloitte found that more than a quarter of UK adults have used generative AI tools like chatbots. This is even higher than the adoption rate of voice-assisted speakers like Amazon's Alexa. Around one in 10 people also use AI at work.

Nearly a third of college students have admitted to using ChatGPT for written assignments such as college essays and high-school art projects. Companies providing AI-detecting tools have been run off their feet as teachers seek help identifying AI-driven cheating. With only one full academic semester since the launch of ChatGPT, AI detection companies are predicting even greater disruption and challenges as schools need to take comprehensive action.

30% of college students use ChatGPT for assignments, to varying degrees.

Image: Intelligent.com

Another area where AI could ring in fundamental changes is journalism. The New York Times, the Washington Post, and News Corp are among publishers talking to Google about using artificial intelligence tools to assist journalists in writing news articles. The tools could help with options for headlines and writing styles but are not intended to replace journalists. News about the talks comes after the Associated Press announced a partnership with OpenAI for the same purpose. However, some news outlets have been hesitant to adopt AI due to concerns about incorrect information and differentiating between human and AI-generated content.

Developers of robots and autonomous machines could learn lessons from honeybees when it comes to making fast and accurate decisions, according to scientists at the University of Sheffield. Bees trained to recognize different coloured flowers took only 0.6 seconds on average to decide to land on a flower they were confident would have food, and were similarly quick to reject flowers they were confident would not. They also made more accurate decisions than humans, despite their small brains. The scientists have now built these findings into a computer model.

Generative AI is set to impact a vast range of areas. For the global economy, it could add trillions of dollars in value, according to a new report by McKinsey & Company. It also found that the use of generative AI could lead to labour productivity growth of 0.1-0.6% annually through 2040.

At the same time, generative AI could lead to an increase in cyberattacks on small and medium-sized businesses, which are particularly exposed to this risk. AI makes new, highly sophisticated tools available to cybercriminals. However, it can be used to create better security tools to detect attacks and deploy automatic responses, according to Microsoft.

Because AI systems are designed and trained by humans, they can generate biased results due to the design choices made by developers. AI may therefore be prone to perpetuating inequalities; mitigating this requires training AI systems to recognize and correct for their own bias.

Read more from the original source:

From Hollywood to Sheffield, these are the AI stories to read this month - World Economic Forum

eXp’s Glenn Sanford on AI’s transformative impact in real estate – HousingWire

Sanford firmly believes that AI is not just a buzzword but a game changer that holds the key to unlocking extraordinary opportunities within the real estate sphere.

"AI is not about replacing real estate professionals; it's about enhancing their abilities and the overall customer journey," asserts Sanford, emphasizing his commitment to leveraging AI as a collaborative tool rather than a divisive force in the industry. Unlike those hesitant to embrace change, Sanford recognizes the immense potential AI brings to the table and views it as an indispensable asset that can elevate agents' proficiency and effectiveness.

"I am an entrepreneur at heart, which means I think like a true entrepreneur; it's less about P&L. I'm not building a business to fund a lifestyle. Most entrepreneurs would rather be broke than have a mediocre business that's technically profitable," he says. It's this mindset, what he calls the mindset of a person who builds a start-up, that encourages him to do radical things [such as investing in AI], he says. "You realize that you can crash and burn a number of times while building something that finally gets traction."

However, Sanford has no plans to crash and burn with the AI-driven solutions tailored explicitly to cater to the ever-changing demands of the modern real estate market. "We're starting to make investments into various companies on the edges. We want to create opportunities for people to merge their new ideas inside the city of eXp that would benefit agents, brokers and staff." That includes eXp Ventures, to foster innovation. "How do we take from companies that have done well and innovate in a modern way?"

By harnessing the power of machine-learning algorithms, eXp Realtys agents can now gain unprecedented insights into market trends, accurately predict property values, and efficiently match buyers with their dream homes.

"We've got a number of instances around the company, and we're going to use other instances of either generative AI or image AI. We are already doing some image AI," says Sanford. "We're already working AI into our search solutions, like Zoocasa and others. So, you'll be able to use natural language search when searching for property. So, the stuff that Zillow's doing, we're incorporating," he says.

"Real estate agents are going to get seriously disrupted by AI," says Sanford, "but not in the value of the real estate agent, but more in the way things are done. Think about the [possibility] that lead follow-up and nurturing campaigns will be managed by AI in the future. Look at platforms like Synthesia, [an AI video generator]. At eXp, we have a partnership with Blended Sense, [a content creation platform], so agents can do a video using Blended Sense [then upload] that into Synthesia," says Sanford.

"The agent can then add in content about their local community that's generated by ChatGPT-4 and pump it into Synthesia. They can self-narrate with their voice using an AI-generated version of themselves with AI-generated content. And in some cases, the consumer won't even know it wasn't the agent actually providing that information," he says.

Sanford envisions a future where AI-driven chatbots effortlessly handle routine inquiries, freeing up valuable time for agents to focus on building deeper connections with clients and offering tailored guidance throughout the real estate journey. "The true essence of real estate lies in nurturing meaningful relationships," Sanford says, "and AI should serve as a seamless enabler rather than an intrusive barrier in achieving that."

While some may view AI as an accessory, Sanford passionately believes that integrating AI is essential in fortifying the industry's foundation for generations to come. He envisions a day when AI algorithms will go beyond predictive analytics and assist agents in curating personalized property recommendations that align perfectly with their clients' preferences and lifestyles.

Moreover, Sanford is not one to rest on his laurels; he relentlessly invests in research and development to push the boundaries of what AI can accomplish for the real estate world. Sanford's commitment to staying ahead of the technological curve is driven by his belief that embracing AI wholeheartedly is not an option but a necessity to remain relevant in an ever-accelerating digital era.

When it comes to integrating AI into your brokerage, Sanford sums it up this way: "The reality is that it doesn't matter what the controversy is. It's literally: those who don't use AI will work for people who use AI."

Read this article:

eXp's Glenn Sanford on AI's transformative impact in real estate - HousingWire

Citi stays positive on A.I. theme and lays out the key to finding … – CNBC

The early innings of the artificial intelligence trade may be over, but Citigroup is staying positive on the tech subsector, viewing cash flows as the key to unlocking the winners of the next phase. "In sum, our message is not to be overly deterred by the significant year-to-date move in profitable AI stocks," the bank said in a Friday note to clients. "Medium- to long-term opportunities still exist as the AI theme has an accelerating growth trajectory and attractive [free cash flow] dynamics that should further improve from here."

So far this year, anything connected to AI has seen a significant uptick in valuation, with Nvidia shares leading the pack, surging more than 200%. While the jaw-dropping price action may suggest AI is no longer an early trade, Citi reiterated that the "initial positive thesis" looks intact and warned investors to avoid overlooking free cash flows. Citi expects many names to meet accelerated growth expectations and views free cash flows as "increasingly compelling." "Profitable stocks within this theme are already impressive cash generating machines," the bank wrote. "Recent AI developments should accentuate this characteristic and push FCF margins and growth to new highs."

Given this setup, Citi screened for AI-related stocks expected to outpace market growth expectations and experience an uptick in free cash flow margins. Here are some of the stocks that made the cut:

Amazon has the highest consensus expectation of more than 48% growth over the long term. Shares have gained almost 54% this year as Wall Street rotates back into technology stocks following the slump in 2022. Some investors have viewed the e-commerce giant as lagging behind its peers in the AI race. During an interview with CNBC this month, CEO Andy Jassy soothed some of those concerns, reiterating Amazon's plan to invest in AI across segments. Earlier this year, Amazon also unveiled a generative AI service called Bedrock for its Amazon Web Services unit, allowing clients to use language models to create their own chatbots and image-generation services.

Competing chatbot heavyweight Alphabet also made the cut. Shares of the Google parent and Bard creator have rallied 38% as it battles it out with Microsoft-backed OpenAI's ChatGPT. Consensus estimates peg long-term growth at more than 17%, with a near-term free cash flow margin of nearly 24%.

A handful of financial stocks were also included in Citi's screen. Mastercard offers the greatest near-term free cash flow yield of the group, at 48.4%. Its long-term consensus growth estimate hovers around 19%. Shares have gained about 15% year to date. Ford Motor, Match Group and ServiceNow also made the list.

CNBC's Michael Bloom contributed reporting.

Read more:

Citi stays positive on A.I. theme and lays out the key to finding ... - CNBC