Archive for the ‘Ai’ Category

The era of AI: Transformative AI solutions powering the energy and … – Microsoft

Energy and resources companies face the tremendous challenge of providing secure and reliable energy for 8.1 billion people and growing while moving toward a carbon-free world. Under pressure to adapt quickly to changing demands, regulations, and technologies, the energy sector is turning to AI to accelerate the energy transition and operate more efficiently, safely, and sustainably.

Today's headlines are dominated by news about AI, from the latest discussions about Microsoft Copilot to ways that AI paves the way for a sustainable energy future. The use of AI is increasing the availability and efficiency of renewable energy sources such as solar, wind, hydroelectric, and biomass, which now account for approximately 30 percent of electricity generated worldwide.1

The World Economic Forum underscores the role AI plays in the energy transition and estimates that every additional 1 percent of efficiency in demand creates USD 1.3 trillion in value between 2020 and 2050 due to reduced investment needs.2

Microsoft partners with organizations across the energy and resources sector on solutions to drive workforce transformation, improve operational efficiencies, accelerate net-zero, and increase energy innovation and growth opportunities. We work with customers and partners in several areas.

Our customers in power and utilities, oil and gas, and mining are transforming their workforce and operations to achieve more with less. These innovators are using digital technologies, data analytics, and automation to improve efficiency, safety, and sustainability. Investments include upskilling their employees, fostering innovation, and collaborating with Microsoft to create value for their customers and stakeholders.

Several industry leaders are at the forefront of leveraging data and AI to accelerate the energy transition.

Our extensive, global partner ecosystem is fundamental to accelerating innovation across the energy sector. While technology is an enabler, collaboration is the true foundation for addressing the world's complex energy challenges. Microsoft is actively working with partners including SLB, Cognite, Bentley, and many others to accelerate ideation and the development and deployment of AI-driven, sustainable energy solutions. You can find out more about our partnerships in my June blog.

Last week we announced Microsoft's vision to deliver Copilot, your everyday AI companion, to help people and businesses be smarter, more creative, more productive, and more connected to the world around them. We believe that together with our customers and partners, Microsoft can help power your teams, businesses, and processes to empower every person and every organization to do their very best work and to achieve more.

In the energy and resources industry, generative AI has the potential to create new solutions and optimize existing processes by enhancing predictive maintenance models, which evaluate the current status of equipment and machinery, whether it's a power line, a truck at a mining site, or an offshore wind turbine. The AI models can proactively make predictions based on usage trends and inform maintenance teams of potential equipment failures in advance, which helps energy companies optimize maintenance schedules, minimize equipment downtime, reduce costs, and ensure a safe and reliable energy supply.
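
To make the idea concrete, here is a minimal sketch of a failure-prediction model of the kind described above. The telemetry file, column names, label, and alert threshold are all assumptions for illustration; this is not any particular vendor's pipeline.

```python
# Minimal predictive-maintenance sketch. The telemetry file, feature columns,
# label and 0.5 alert threshold are illustrative assumptions only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical telemetry: one row per asset per day, plus a label indicating
# whether the asset failed within the following 30 days.
df = pd.read_csv("turbine_telemetry.csv")
features = ["vibration_rms", "bearing_temp_c", "output_kw", "hours_since_service"]
X, y = df[features], df["failed_within_30d"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Flag assets whose predicted failure probability crosses the maintenance threshold.
risk = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, (risk > 0.5).astype(int)))
```

In practice the threshold would be tuned against the relative cost of an unnecessary inspection versus a missed failure.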

AI and machine learning can also improve the security of energy grids, helping prevent cyberattacks before they happen by using data analytics to identify patterns in energy data that may be indicative of a breach. AI can also empower field workers to identify high-risk tasks and help prevent serious injuries by analyzing large data sets on work sites, schedules, and historical incidents. AI models can predict future supply chain information, such as forecasting demand for specific products and optimizing inventory levels, and there are countless more examples around service desk scenarios, customer care and support, and internal knowledge assistants.
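
A minimal sketch of the grid-security idea, pattern-based anomaly detection over telemetry, might look like the following. The synthetic readings and contamination rate are assumptions for illustration, not a production security tool.

```python
# Minimal anomaly-detection sketch for grid telemetry. Synthetic data and the
# contamination rate are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical substation load readings in MW: mostly normal, plus a few
# spikes of the kind that might indicate tampering or a fault.
normal = rng.normal(loc=500.0, scale=25.0, size=(1000, 1))
spikes = rng.normal(loc=800.0, scale=10.0, size=(10, 1))
readings = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = detector.predict(readings)  # -1 = anomalous, 1 = normal
print(f"Flagged {(flags == -1).sum()} of {len(readings)} readings for review")
```

Flagged readings would go to an analyst rather than triggering automatic action, since unusual load is not always malicious.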

As AI technology continues to rapidly evolve, Microsoft is committed to the advancement of AI driven by ethical principles and to making sure AI systems are developed responsibly and in ways that maintain trust. Our AI solutions and technology development align with Microsoft's AI Principles (fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability) along with Microsoft's Responsible AI Standard, developed in partnership with responsible AI experts across the company.

I hope you're as excited as I am by the latest AI innovations across the energy and resources sector.

Explore how Microsoft is empowering the world to achieve more with AI.

1. "AI paves the way for a sustainable energy future," Journal of Petroleum Technology, February 2023.

2. "Artificial intelligence is a critical enabler of the energy transition," World Economic Forum in collaboration with BloombergNEF and Deutsche Energie-Agentur, September 2021.


Bumble Wants AI to Be ‘A Supercharger to Love and Relationships’ – CNET

At Code Conference 2023, Bumble CEO Whitney Wolfe Herd explained how the company wants to increase the use of artificial intelligence in its apps, from coaching users in dating and relationships to one day using AI as intelligent matchmakers to save users a lot of swiping.

Herd first revealed that Bumble was investing in AI in a Bloomberg interview earlier this month, and she explained on stage at Code Conference in Dana Point, California, how AI could improve matches, save users time and even coach them.

"I really think about AI as a supercharger to love and relationships," Herd said. Bumble is thinking of using AI to help folks before meeting other users too, by alleviating worries that they'll be bad at dating. "We can actually leverage AI to train people to interact in a way that makes them feel positive so that they can get to the human," she said.

She clarified that Bumble doesn't want to replace humans with bots or have them fall in love with a digital partner (no Her situations, then). Bumble wants to integrate AI in a way that reduces the time from matching to meeting in person: "We are definitely not in the business of keeping you on your phone forever," Herd said.

In the future, Bumble could even use AI as a sort of matchmaker that uses preferred dating parameters and deal breakers -- values, ideal vacations and best ways to spend a weeknight, Herd gave as examples -- to swiftly sift through a massive pool of potential daters and present only the most likely matches to users. Perhaps AI could even use image recognition on profile photos to match the restaurants and brands one person likes with those that other users like.
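
As a toy sketch of the matchmaking idea Herd describes, one could hard-filter on deal breakers and then rank the survivors by preference overlap. All field names and the scoring rule below are invented for illustration; this is not Bumble's algorithm.

```python
# Toy matchmaking sketch: filter deal breakers, rank by shared values.
# Field names and scoring are illustrative assumptions, not Bumble's system.
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    values: set = field(default_factory=set)         # e.g. {"travel", "family"}
    traits: set = field(default_factory=set)         # facts about this person
    deal_breakers: set = field(default_factory=set)  # traits the user rejects

def top_matches(user: Profile, pool: list, k: int = 3) -> list:
    # Drop anyone who has a trait the user has marked as a deal breaker.
    viable = [p for p in pool if not (user.deal_breakers & p.traits)]
    # Rank survivors by shared values, a stand-in for richer preference models.
    return sorted(viable, key=lambda p: len(user.values & p.values), reverse=True)[:k]

me = Profile("me", values={"hiking", "weeknight cooking"}, deal_breakers={"smoker"})
pool = [
    Profile("a", values={"hiking"}, traits={"smoker"}),
    Profile("b", values={"hiking", "weeknight cooking"}),
]
print([p.name for p in top_matches(me, pool)])  # ['b']
```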

This could be bundled into something Herd acknowledged Bumble is currently working on: a more exclusive tier of service above Bumble Premium that will be priced higher and do a lot of the matchmaking for you.

"I hear from women friends of mine say, 'I don't have time to get on this dating app and swipe for an hour -- can Bumble just do it for me? I'll pay whatever you want," Herd said. While she didn't offer more details on pricing or availability, she said it would be an "AI-supercharged" version of the current products that "feels very curated, very selective."

Herd reiterated Bumble's commitment to protecting users as the company continues exploring AI, and it's developing a set of terms and conditions to clarify to users how it will use AI tools. Given the proliferation of AI abuse such as pornographic deepfake imitations, which Bumble has vowed to fight alongside tech and media partners, this disclosure fits Bumble's approach and will be part of "a suite of other initiatives" in the realm of legislation, Herd said.


Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.


‘Really up to them’: Faculty members take lead on AI syllabus … – Duke Chronicle

Duke Learning Innovation responded to concerns about the use of artificial intelligence to commit academic dishonesty by creating a set of guidelines for faculty to consult as they design their courses.

As large language models such as OpenAI's ChatGPT become more accessible, professors worry that students may use these programs to draft papers, solve math equations and complete other assignments. Last year, before the University released official guidance on the use of AI in the classroom, some faculty members opted to change their courses in response, while others didn't believe it was necessary to make changes just yet.

Now, universities across the country, including many of Duke's peers, have released guidance on how professors can address AI's use in the classroom. Duke's own guidelines recognize that it is up to each professor to determine whether they will allow AI to be used in their courses.

Some faculty members, like Professor of Statistical Science Jerry Reiter, have made changes to their syllabi for the fall semester to address the use of AI. Reiter does not allow students to use AI to complete assignments in his course, Statistical Science 322/522, Design of Surveys and Causal Studies.

Students need to sit and struggle with the problems in order to get the fullest conceptual understanding, something that cannot be achieved by simply plugging an equation into AI, he said.

"I try to provide a lot of office hours and TA office hours and help for students who struggle so that they can get those questions answered and hopefully not have to turn to the AI for help," Reiter said. "For me, it's really about, how can I set up my course so that students get the most out of it?"

Students cannot use AI in a manner that violates the Duke Community Standard, which considers "using, consulting and/or maintaining unauthorized shared resources including, but not limited to, test banks, solutions materials and/or artificial intelligence" a form of cheating.

Denise Comer, professor of the practice and director of the Thompson Writing Program, also stressed the importance of providing students with additional resources for classes where the use of AI is prohibited. She highlighted the Thompson Writing Studio as a useful resource for writers at any stage of their work.

"You might be shortchanging your own education and development and growth by taking unauthorized shortcuts or by engaging in questionable ethical decisions," she said. "If students are thinking of making an unethical choice that's against the policy on the syllabus, [the next step] might be to recognize that writing is thinking, and when we engage ourselves as humans in the writing process, we're actually thinking through ideas and developing perspectives."

Comer also said she appreciates that Duke Learning Innovation acknowledges the benefits of AI in academia, alongside its drawbacks.

Her colleague, Xiao Tan, a lecturer in the Thompson Writing Program, received funding from the Pellets Foundation to license generative AI that allows her students to create photographic essays with AI-generated images.

"Some of my colleagues in the writing program are also using generative AI to offer opportunities for students to think really deeply about various aspects of writing, such as revision," Comer said.

Both Reiter and Comer said they believe that the guidelines have validated the perspectives of individual professors and encouraged them to take the lead on how AI should be used.

Reiter said he appreciates that Duke is giving faculty both freedom and guidance to make their own decisions about the role of AI in their courses.

"Faculty should have ownership of their course design and how the learning outcomes are addressed throughout the course," Comer said. "It's really up to them."



Mayo Clinic to deploy and test Microsoft generative AI tools – Stories – Microsoft

ROCHESTER, Minn., and REDMOND, Wash. -- Sept. 28, 2023 -- Mayo Clinic, a world leader in healthcare known for its commitment to innovation, is among the first healthcare organizations to deploy Microsoft 365 Copilot. This new generative AI service combines the power of large language models (LLMs) with organizational data from Microsoft 365 to enable new levels of productivity in the enterprise.

Mayo Clinic is testing Microsoft 365 Copilot through the Early Access Program with hundreds of its clinical staff, doctors and healthcare workers.

"Microsoft 365 Copilot has the ability to transform work across virtually every industry so people can focus on the most important work and help move their organizations forward," said Colette Stallbaumer, general manager, Microsoft 365. "We're excited to be helping customers like Mayo Clinic achieve their goals."

Generative AI has the potential to support Mayo Clinic's vision to transform healthcare. For example, generative AI can help doctors automate form-filling tasks. Administrative demands continue to burden healthcare providers, taking up valuable time that could be used to provide more focused care to patients. Microsoft 365 Copilot has the potential to give healthcare providers valuable time back by automating tasks.

Mayo Clinic is one of the first organizations to start working with Copilot tools to enhance the staff experience across apps like Microsoft Outlook, Word, Excel and more. Microsoft 365 Copilot combines the power of LLMs with data in the Microsoft 365 apps, including calendars, emails, chats, documents and meeting transcripts, to turn words into a powerful productivity tool.
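
Conceptually, this pattern is retrieval-augmented generation: fetch the user's relevant work content, then ground the model's prompt in it. The sketch below illustrates the shape of that flow; every function and data structure in it is a hypothetical stand-in, not the Microsoft 365 Copilot API.

```python
# Conceptual sketch of grounding an LLM in organizational data. All names
# here are hypothetical stand-ins; this is not the Microsoft 365 Copilot API.

ORG_DOCS = {  # toy index standing in for emails, chats, documents, calendars
    "q3 review deck": "Q3 revenue grew 8 percent; action item: update the board deck.",
    "oncall schedule": "Dr. Lee covers cardiology on-call next week.",
}

def retrieve(question: str) -> list:
    """Naive keyword retrieval over the toy index."""
    q = question.lower()
    return [text for key, text in ORG_DOCS.items() if any(w in q for w in key.split())]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM endpoint; here it just echoes the prompt."""
    return "[an LLM would answer from:]\n" + prompt

def answer_with_context(question: str) -> str:
    context = "\n".join(retrieve(question)) or "(no matching documents)"
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer_with_context("Who is on the oncall schedule next week?"))
```

Grounding the prompt in retrieved content is what lets the assistant answer from the user's own calendars and documents rather than from the model's general training data.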

"Privacy, ethics and safety are at the forefront of Mayo Clinic's work with generative AI and large language models," said Cris Ross, chief information officer at Mayo Clinic. "Using AI-powered tech will enhance Mayo Clinic's ability to lead the transformation of healthcare while focusing on what matters most: providing the best possible care to our patients."

As a leader in healthcare, Mayo Clinic is always looking for new ways to improve patient care. By using generative AI and LLMs, Mayo Clinic will be able to offer its teams new timesaving tools to help them succeed.

About Mayo Clinic

Mayo Clinic is a nonprofit organization committed to innovation in clinical practice, education and research, and providing compassion, expertise and answers to everyone who needs healing. Visit the Mayo Clinic News Network for additional Mayo Clinic news.

About Microsoft

Microsoft (Nasdaq "MSFT" @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [emailprotected]

Samiha Khanna, Mayo Clinic, (507) 266-2624, [emailprotected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft's Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.


Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. – The New York Times

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in A.I. technology have also brought forth a unifying realization of the risks and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you'll realize this isn't really a debate only about A.I. It's also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they're already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions. One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics. By decoding who is speaking and how A.I. is being described, we can explore where these groups differ and what drives their views.

The loudest perspective is a frightening, dystopian vision in which A.I. poses an existential risk to humankind, capable of wiping out all life on Earth. A.I., in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. A.I. could destroy humanity or pose a risk on par with nukes. If we're not careful, it could kill everyone or enslave humanity. It's likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth's resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the A.I. safety people, and their ranks include the "Godfathers of A.I.," Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic the capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other A.I. tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of longtermism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable-sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic longtermist would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of longtermism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It's widely believed that Jaan Tallinn, the wealthy longtermist who co-founded the most prominent centers for the study of A.I. safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from A.I. The technology historian David C. Brock calls these fears "wishful worries": problems that it would be nice to have, in contrast to the actual agonies of the present.

More practically, many of the researchers in this group are proceeding full steam ahead in developing A.I., demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course. While we shouldn't dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns. Let's not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there's plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanitys worst instincts are encoded into and enforced by machines. The doomsayers think A.I. enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these A.I. ethics concerns like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O'Neil have been raising the alarm on inequities coded into A.I. for years. Although we don't have a census, it's noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable A.I., she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside or even above their self-interest. They point to social media companies' failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google's A.I. ethics team, was dismissed for pointing out the risks of developing ever-larger A.I. language models.

While doomsayers and reformers share the concern that A.I. must align with human interests, reformers tend to push back hard against the doomsayers' focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I.: misinformation, surveillance and inequity. Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.

This group's concerns are well documented and urgent and far older than modern A.I. technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.

Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security. One version has a post-9/11 ring to it: a world where terrorists, criminals and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an A.I. arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI's Sam Altman and Meta's Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups. In the lobbying battles over Europe's trailblazing A.I. regulatory framework, U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, "The answer to our challenges is not to slow down technology but to accelerate it."

Any technology critical to national defense usually has an easier time avoiding oversight, regulation and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google's former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in U.S. national security concerns.

The warriors' narrative seems to miss that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.

As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up's business plan. Cosma Shalizi and Henry Farrell further argue that we've lived among shoggoths for centuries, tending to them as though they were our masters, as monopolistic platforms devour and exploit the totality of humanity's labor and ingenuity for their own interests. This dread applies as much to our future with A.I. as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.

By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century's key technology while offering a platform for the ethical development and use of A.I.

Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness A.I. to accumulate much more or pursue extreme ideologies, let's think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.
