Archive for the ‘Ai’ Category

Transformers: the Google scientists who pioneered an AI revolution – Financial Times



FACT SHEET: Biden-Harris Administration Secures Voluntary … – The White House

Voluntary commitments underscoring safety, security, and trust mark a critical step toward developing responsible AI

Biden-Harris Administration will continue to take decisive action by developing an Executive Order and pursuing bipartisan legislation to keep Americans safe

Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris Administration have moved with urgency to seize the tremendous promise and manage the risks posed by Artificial Intelligence (AI) and to protect Americans' rights and safety. As part of this commitment, President Biden is convening seven leading AI companies at the White House today -- Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI -- to announce that the Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.

Companies that are developing these emerging technologies have a responsibility to ensure their products are safe. To make the most of AI's potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn't come at the expense of Americans' rights and safety.

These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI -- safety, security, and trust -- and mark a critical step toward developing responsible AI. As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe.

There is much more work underway. The Biden-Harris Administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.

Today, these seven leading AI companies are committing to:

Ensuring Products are Safe Before Introducing Them to the Public

Building Systems that Put Security First

Earning the Public's Trust

As we advance this agenda at home, the Administration will work with allies and partners to establish a strong international framework to govern the development and use of AI. It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The United States seeks to ensure that these commitments support and complement Japan's leadership of the G-7 Hiroshima Process -- a critical forum for developing shared principles for the governance of AI -- as well as the United Kingdom's leadership in hosting a Summit on AI Safety, and India's leadership as Chair of the Global Partnership on AI. We also are discussing AI with the UN and Member States in various UN fora.

Today's announcement is part of a broader commitment by the Biden-Harris Administration to ensure AI is developed safely and responsibly, and to protect Americans from harm and discrimination.

###


Remarks by President Biden on Artificial Intelligence – The White House

Roosevelt Room

1:18 P.M. EDT

THE PRESIDENT: I'm the AI. (Laughter.) If any of you think I'm Abe Lincoln, blame it on the AI.

First of all, thanks. Thanks for coming. And I want to thank my colleagues here for taking the time to come back again and again as we try to deal with the -- we're joined by leaders of seven American companies who are driving innovation in artificial intelligence. And it is astounding.

Artificial intelligence -- or it promises an enormous -- an enormous promise of both risk to our society and our economy and our national security, but also incredible opportunities -- incredible opportunities.

Just two months ago, Kamala and I met with these leaders -- most of them are here again -- to underscore the responsibility of making sure that the products they are producing are safe and -- and making public what they are and what they aren't.

Since then, I've met with some of America's top minds in technology to hear the range of perspectives and possibilities and risks of AI.

Kamala can't be here because she's traveling to Florida, but she's met with civil society leaders to hear their concerns about the impacts on society and ways to protect the rights of Americans.

Over the past year, my administration has taken action to guide responsible innovation.

Last October, we introduced a first-of-its-kind AI Bill of Rights.

In February, I signed an executive order to direct agencies to protect the public from algorithms that discriminate.

In May, we unveiled a new strategy to establish seven new AI research institutes to help drive breakthroughs in responsible AI innovention [innovation].

And today, I'm pleased to announce that these seven companies have agreed volun- -- to voluntary commitments for responsible innovation. These commitments, which the companies will implement immediately, underscore three fundamental principles: safety, security, and trust.

First, the companies have an obligation to make sure their technology is safe before releasing it to the public. That means testing the capabilities of their systems, assessing their potential risk, and making the results of these assessments public.

Second, companies must prioritize the security of their systems by safeguarding their models against cyber threats and managing the risks to our national security and sharing the best practices and industry standards that are -- that are necessary.

Third, the companies have a duty to earn the people's trust and empower users to make informed decisions -- labeling content that has been altered or AI-generated, rooting out bias and discrimination, strengthening privacy protections, and shielding children from harm.

And finally, companies have agreed to find ways for AI to help meet society's greatest challenges -- from cancer to climate change -- and invest in education and new jobs to help students and workers prosper from the opportunities -- and there are enormous opportunities -- of AI.

These commitments are real, and they're concrete. They're going to help the industry fulfill its fundamental obligation to Americans to develop safe, secure, and trustworthy technologies that benefit society and uphold our values -- our shared values.

Let me close with this. We'll see more technology change in the next 10 years, or even in the next few years, than we've seen in the last 50 years. That has been an astounding revelation to me, quite frankly. Artificial intelligence is going to transform the lives of people around the world.

The group here will be critical in shepherding that innovation with responsibility and safety by design to earn the trust of Americans. And, quite frankly, as I met with world leaders, all -- all -- all our Eur- -- all the G7 is focusing on the same thing.

Social media has shown us the harm that powerful technology can do without the right safeguards in place.

And I've said at the State of the Union that Congress needs to pass bipartisan legislation to impose strict limits on personal data collection, ban targeted advertisements to kids, and require companies to put health and safety first.

But we must be clear-eyed and vigilant about the threats emerging -- of emerging technologies that can pose -- don't have to, but can pose -- to our democracy and our values.

Americans are seeing how advanced artificial intelligence and the pace of innovation have the power to disrupt jobs and industries.

These commitments -- these commitments -- are a promising step, but we have a lot more work to do together.

Realizing the promise of AI by managing the risk is going to require some new laws, regulations, and oversight.

In the weeks ahead, I'm going to continue to take executive action to help America lead the way toward responsible innovation. And we're going to work with both parties to develop appropriate legislation and regulation. I'm pleased that Leader Schumer and Leader Jeffries and others in the Congress are making this a top bipartisan priority.

As we advance the agenda here at home, we'll lead the work with -- we'll lead work with our allies and partners on a common international framework to govern the development of AI.

I think these leaders -- and I thank these leaders -- that are in the room with me today (clears throat) and their partnership -- excuse me -- and their commitments that they're making. This is a serious responsibility, and we have to get it right. And there's enormous, enormous potential upside as well.

So I want to thank you all. And they're about to go down to a meeting, which I'll catch up with them later.

So thank you, thank you, thank you.

1:24 P.M. EDT


Generative AI and Web3: Hyped nonsense or a match made in tech … – VentureBeat


Did I write this, or was it ChatGPT?

It's hard to tell, isn't it?

For the sake of my editors, I will follow that quickly with: I wrote this article (I swear). But the point is that it's worth exploring generative artificial intelligence's limitations and areas of utility for developers and users. Both are revealing. The same is true for Web3 and blockchain.

While we're already seeing the practical applications of Web3 and generative AI play out in tech platforms, online interactions, scripts, games and social media apps, we're also seeing a replay of the responsible AI and blockchain 1.0 hype cycles of the mid-2010s.


We need a set of principles or ethics to guide innovation. We need more regulation. We need less regulation. There are bad actors poisoning the well for the rest of us. We need heroes to save us from AI and/or blockchain. Technology is too sentient. Technology is too limited. There is no enterprise-level application. There are countless enterprise-level applications.

If you exclusively read the headlines, you will come out the other side with the conclusion that the combo of generative AI and blockchain will either save the world or destroy it.

We've seen this play (and every act and intermission) before with the hype cycles of both responsible AI and blockchain. The only difference this time is that the articles we're reading about ChatGPT's implications may, in fact, have been written by ChatGPT. And the term blockchain has a bit more heft behind it thanks to investment from Web2 giants like Google Cloud, Mastercard and Starbucks.

That said, it's notable that OpenAI's leadership recently called for an international regulatory body akin to the International Atomic Energy Agency (IAEA) to regulate and, when necessary, rein in AI innovation. The proactive move illuminates an awareness of both AI's massive potential and its potentially society-crumbling pitfalls. It also conveys that the technology itself is still in test mode.

The other significant subtext: Public sector regulation at the federal and sub-federal levels commonly limits innovation.

As with Web3, and whether or not regulatory action takes place, responsibility needs to be at the core of generative AI innovation and adoption. As the technology evolves rapidly, it's important for vendors and platforms to assess every potential use case to ensure responsible experimentation and adoption. And, as OpenAI's Sam Altman and Google's Sundar Pichai notably point out, working with the public sector to evolve regulation is a significant part of that equation.

It's also important to surface limitations, transparently report on them, and provide guardrails if or when issues become apparent.

While AI and blockchain have both been around for decades, the impact of AI, in particular, is now visible with ChatGPT, Bard and the entire field of generative AI players. Together with Web3's decentralized power, we're about to witness an explosion of practical applications that build on progress automating interactions and advancing Web3 in more visible ways.

From a user-centric perspective (and whether we know it or not), generative AI and blockchain are both already transforming how people interact in the real world and online. Solana recently made it official with a ChatGPT integration. And the exchange Bitget backed away from theirs.

Promising or puzzling, every signal indicates that it remains to be seen where the technologies best intersect in the name of user experience and user-centric innovation. From where I sit as the head of a layer-1 blockchain built for scale and interoperability, the question becomes: How should AI and blockchain join forces in pursuit of Web3's own ChatGPT moment of mainstream adoption?

Tools like ChatGPT and Bard will accelerate the next major waves of innovation on Web2 and Web3. The convergence of generative AI and Web3 will be like the pairing of peanut butter and jelly on fresh bread but, you know, with code, infrastructure, and asset portability. And, as hype is replaced with practical applications and constant upgrades, persistent questions about whether these technologies will take hold in the mainstream will be toast.

Enterprise leaders should view generative AI as a tool worth exploring, testing, and after doing both, integrating. Specifically, they should focus efforts on exploring how the generative element can improve work outcomes internally with teams and externally with customers or partners. And they should continuously map out its enterprise-wide potential and limitations.

It's time to begin to map out and document where not to use generative AI, which is equally important in my book. Don't rely on the technology for anything where you need to apply facts and hard data to outputs for community members, partners, teams or investors, and don't rely on it for protocol upgrades, software engineering, coding sprints or international business operations.

On a practical level, enterprise leaders should consider incorporating generative AI into administrative workflows to keep their company's day-to-day workflows moving faster and more efficiently. Explore its seemingly universal utility to kick off text- or code-heavy projects across engineering, marketing, business and executive functions. And since this tech changes by the day, enterprise leaders should look at every possible new use case to decide whether to responsibly experiment with it en route to adoption, which also applies to work in Web3.

Mo Shaikh is cofounder and CEO of Aptos Labs.



AI fears are fueling the labor strikes in Hollywood – LEX 18 News – Lexington, KY

Artificial intelligence is poised to be the next big thing in a bunch of different industries, including the entertainment sector. But the workers AI might one day replace are fighting for a say in how the technology is used before it gets too big to stop.

Generative AI, meaning AI that can create text, images, and other content, can sometimes feel like a magic box: give it a prompt, and it'll spit out a more-or-less correct response that looks like it's been written by a person.

The technology's ability to easily churn out human-quality work for cheap has many artists and writers worried. Artificial intelligence isn't going to replace screenwriters wholesale any time soon, but it could still undermine creative jobs by giving production studios a cheap way to underpay writers.

Bryan Sullivan is a lawyer who specializes in crisis management for the entertainment industry. He told Next Level, "I don't think people realized until recently that writers view AI as a threat. The studios could cut the first layer of writing out by using an AI system and then hiring a writer to do a polish, which is a lot less money."


The potential threat of AI is one issue behind the Writers Guild of America's most recent strike. Part of the union's demands when they struck aimed to limit studios' ability to use AI to cut costs on projects.

AI fears also motivated actors to walk off the job alongside the writers. The actors union, SAG-AFTRA, cited concerns that actors performances could be replicated by artificial intelligence as one justification for their strike.

Writers in Hollywood have already seen their contracts and opportunities shrink in the face of studios' efforts to save money. In that environment, it's hard not to look at AI in Hollywood as less of a creative engine and more of a cost-cutting measure.

Helen Silverstein is a video game writer and the co-chair of DSA-LA's Hollywood Labor Committee. She told Next Level: "So many writers who, despite writing on Emmy award winning shows, are on food stamps or struggling, living paycheck to paycheck, struggling to survive. It is not just about writing or even just creativity at all. It's about working people being able to live, and create, and work, survive, and thrive."

There may not be a whole lot workers can do to protect themselves from being replaced; strikes only work when AI isn't developed enough to cross the picket line. Strikes and protests from workers might not change how the technology behind AI develops, but they can try to shape how it's used by the profit-driven industries around them.

