Archive for the ‘Ai’ Category

This is how I plan to explain AI PC to my confused friends and relatives – TechRadar

It occurred to me this morning that I will soon be explaining to a friend or relative what an AI PC is and what they're meant to do with it.

The answer seems obvious to me because I've been covering AI and PCs for decades. But as I try to articulate the meaning, I stumble:

None of my first attempts comes close to capturing it. What makes more sense is this:

A PC that works the way they promised it would when we first started computing.

Instead of a dense box full of information, memories, and apps that can go through it all, it's a wonder box that anticipates your intentions, takes actions on your behalf, and never leaves you wondering, "How do I do that?"

Granted, the AI PCs you'll see this summer are still not quite that. However, there will be hints of that power and potential.

Microsoft's one-button Copilot access, built into the new Surface Windows PCs it makes and myriad partner laptops and desktops, is not just a marketing stunt. The Copilot button might initially be considered a "when all else fails" button. You hit it, and Copilot might rescue you because it lets you ask your question in a way that makes sense to you. An AI PC will know itself as you know yourself. It will know more about the computer's inner workings, settings, and AI-compliant apps than you do, and it might not make you wade through apps, settings, and menus to get results.


Any application's menu system is a developer's best guess at the intentions of millions of users, and when you try to satisfy everyone, you usually satisfy no one.

AI-integrated machines will outstrip the rudimentary intelligence of your average PC and apps with something approaching human reasoning. That could make your PC the digital partner you always wanted. Unlike tiny AI-infused gadgets like the Rabbit R1 and Humane AI Pin, they won't insist you learn a new usage paradigm. These AI PCs look like your old PCs, which means you use them as you want, in whatever way makes you happy, and tap into that new AI superpower on an as-needed basis.

If my friends and relatives also ask how the PC can be so smart, where all that intelligence resides, and whether every question they ask ends up in the hands of a third party, that's when the conversation might get a little more complicated.

Breaking down this complex issue, I'd explain that most AI PCs will take a half-and-half approach. Some intelligence will be right there, in the brand-new AI brain, or NPU, but the rest could reside on cloud servers owned by Microsoft, Apple, or even Google. Choosing your new AI PC will come down to who you trust to keep your queries private.

I'm also pretty sure this explanation will hold up next month when Apple introduces its own AI Macs (they're also PCs, by the way).

Yeah, this is what I'll say if someone asks me.

Read this article:

This is how I plan to explain AI PC to my confused friends and relatives - TechRadar

Releasing a new paper on openness and artificial intelligence – Mozilla & Firefox

For the past six months, the Columbia Institute of Global Politics and Mozilla have been working with leading AI scholars and practitioners to create a framework on openness and AI. Today, we are publishing a paper that lays out this new framework.

During earlier eras of the internet, open source technologies played a core role in promoting innovation and safety. Open source technology provided a core set of building blocks that software developers have used to do everything from creating art to designing vaccines to developing apps used by people all over the world; it is estimated that open source software is worth over $8 trillion in value. And attempts to limit open innovation, such as export controls on encryption in early web browsers, ended up being counterproductive, further exemplifying the value of openness.


Today, open source approaches for artificial intelligence, and especially for foundation models, offer the promise of similar benefits to society. However, defining and empowering open source for foundation models has proven tricky, given its significant differences from traditional software development. This lack of clarity has made it harder to recommend specific approaches and standards for how developers should advance openness and unlock its benefits. Additionally, conversations about openness in AI have often operated at a high level, making it harder to reason about the benefits and risks of openness in AI. Some policymakers and advocates have blamed open access to AI for certain safety and security risks, often without concrete or rigorous evidence to justify those claims. On the other hand, people often tout the benefits of openness in AI, but without specificity about how to actually harness those opportunities.

That's why, in February, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI for the Columbia Convening. These individuals, drawn from prominent open source AI startups and companies, nonprofit AI labs, and civil society organizations, focused on exploring what "open" should mean in the AI era.

Today, we are publishing a paper that presents a framework for grappling with openness across the AI stack. The paper surveys existing approaches to defining openness in AI models and systems, and then proposes a descriptive framework to understand how each component of the foundation model stack contributes to openness. It enables, without prescribing, an analysis of how to unlock specific benefits from AI, based on desired model and system attributes. Furthermore, the paper adds clarity to support further work on this topic, including work to develop stronger safety safeguards for open systems.

We believe this framework will support timely conversations across the technical and policy communities. For example, this week, as policymakers discuss AI policy at the AI Seoul Summit 2024, this framework can help clarify how openness in AI can support societal and political goals, including innovation, safety, competition, and human rights. And, as the technical community continues to build and deploy AI systems, this framework can support AI developers in ensuring their AI systems achieve their intended goals, promote innovation and collaboration, and reduce harms. We look forward to working with the open source and AI community, as well as the policy and technical communities more broadly, to continue building on this framework going forward.

Read more from the original source:

Releasing a new paper on openness and artificial intelligence - Mozilla & Firefox

Cooler Master introduces colored ‘AI Thermal Paste’: CryoFuze 5 comes with nano-diamond technology – Tom’s Hardware

Cooler Master just released a new line of CryoFuze 5 'AI Thermal Paste' that comes in six different colors. The company uses zinc oxide and aluminum powder to make the colorful thermal paste, while also claiming that it uses 'nano-molecular technology' to deliver stable performance.

While the added colors are likely just a gimmick, or a perk for creators filming their PC builds, the bigger claim here is the thermal paste's performance and stability across a wide range of temperatures. According to the CryoFuze 5 China product page, the thermal paste has a thermal conductivity coefficient of 12.6 W/mK, giving it better performance than all other thermal pastes we've tested in our Best Thermal Paste for 2024 guide, save for the SYY 157, which has a rating of 15.7 W/mK. It won't match liquid metal thermal pastes, however, which offer thermal conductivity ratings of 73 W/mK or higher.
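To put those conductivity figures in perspective, here is a back-of-envelope comparison using Fourier's law for conduction through a thin layer, q = k × ΔT / d. The bond-line thickness and temperature delta below are illustrative assumptions, not figures from Cooler Master's spec sheet:

```python
# Rough heat-flux comparison via Fourier's law: q = k * dT / d.
# Assumed values: 10 K temperature drop across a 50-micron paste layer.

def heat_flux(k, delta_t=10.0, thickness=50e-6):
    """Heat flux in W/m^2 through a layer of thermal conductivity k (W/mK)."""
    return k * delta_t / thickness

for name, k in [("CryoFuze 5", 12.6), ("SYY 157", 15.7), ("liquid metal", 73.0)]:
    print(f"{name}: {heat_flux(k) / 1e6:.1f} MW/m^2")
# CryoFuze 5: 2.5 MW/m^2
# SYY 157: 3.1 MW/m^2
# liquid metal: 14.6 MW/m^2
```

The takeaway matches the article: the gap between 12.6 and 15.7 W/mK is modest, while liquid metal moves heat several times faster through the same layer.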


Cooler Master uses AI branding on the CryoFuze 5, but there is nothing AI about a thermal paste. Perhaps Cooler Master could've designed it for AI processors, especially as next-generation AI chips like Intel's Falcon Shores and Nvidia's B100 and B200 GPUs have TDPs higher than 1,000 watts, but the CryoFuze 5's thermal performance isn't that far ahead of its competitors.

The CryoFuze 5 might not mean much for the average PC builder. But enthusiasts looking for style points in their build videos might love it (even though no one will ever see it again once the PC is assembled, unless they take the CPU cooler off). This also isn't the first colored thermal paste from Cooler Master, which already sells the CryoFuze Violet thermal grease.

More importantly, the CryoFuze 5's high thermal conductivity (for a thermal paste) lets overclockers push high-performance silicon even further. This is particularly relevant for builders using more exotic solutions, like the EKWB AIO liquid cooler designed for delidded CPUs, or for those who replace the processor's stock heat spreader with a custom one from Thermal Grizzly.

The stability of Cooler Master's colorful thermal paste adds another advantage, especially for overclockers who aim to get the most out of their silicon. If you're one of the few who use liquid nitrogen to cool your PC, you'll appreciate the CryoFuze 5's ability to work from -50°C to 240°C.

Liquid metal should still perform better than the CryoFuze 5, but it comes with the added risk of shorting components as it's a conductive material. While the color options and AI branding are likely just for marketing purposes, its improved performance should help enthusiasts looking to redline their systems.


Read the original:

Cooler Master introduces colored 'AI Thermal Paste': CryoFuze 5 comes with nano-diamond technology - Tom's Hardware

Wisconsin man arrested for allegedly creating AI-generated child sexual abuse material – The Verge

A Wisconsin software engineer was arrested on Monday for allegedly creating and distributing thousands of AI-generated images of child sexual abuse material (CSAM).

Court documents describe Steven Anderegg as "extremely technologically savvy," with a background in computer science and decades of experience in software engineering. Anderegg, 42, is accused of sending AI-generated images of naked minors to a 15-year-old boy via Instagram DM. Anderegg was put on law enforcement's radar after the National Center for Missing & Exploited Children flagged the messages, which he allegedly sent in October 2023.

According to information law enforcement obtained from Instagram, Anderegg posted an Instagram story in 2023 consisting of a realistic GenAI image of minors wearing BDSM-themed leather clothes and encouraged others to check out what they were missing on Telegram. In private messages with other Instagram users, Anderegg allegedly discussed his desire to have sex with prepubescent boys and told one Instagram user that he had tons of other AI-generated CSAM images on his Telegram.

Anderegg allegedly began sending these images to another Instagram user after learning he was only 15 years old. "When this minor made his age known, the defendant did not rebuff him or inquire further. Instead, he wasted no time in describing to this minor how he creates sexually explicit GenAI images and sent the child custom-tailored content," charging documents claim.

When law enforcement searched Anderegg's computer, they found over 13,000 images, with hundreds if not thousands of those images depicting nude or semi-clothed prepubescent minors, according to prosecutors. Charging documents say Anderegg made the images with the text-to-image model Stable Diffusion, a product created by Stability AI, using extremely specific and explicit prompts. Anderegg also allegedly used negative prompts to avoid creating images depicting adults and used third-party Stable Diffusion add-ons that specialized in producing genitalia.

Last month, several major tech companies, including Google, Meta, OpenAI, Microsoft, and Amazon, said they'd review their AI training data for CSAM. The companies committed to a new set of principles that include stress-testing models to ensure they aren't creating CSAM. Stability AI also signed on to the principles.

According to prosecutors, this is not the first time Anderegg has come into contact with law enforcement over his alleged possession of CSAM via a peer-to-peer network. In 2020, someone using the internet in Anderegg's Wisconsin home tried to download multiple files of known CSAM, prosecutors claim. Law enforcement searched his home in 2020, and Anderegg admitted to having a peer-to-peer network on his computer and frequently resetting his modem, but he was not charged.

In a brief supporting Anderegg's pretrial detention, the government noted that he's worked as a software engineer for more than 20 years, and his CV includes a recent job at a startup, where he used his "excellent technical understanding in formulating AI models."

If convicted, Anderegg faces up to 70 years in prison, though prosecutors say the recommended sentencing range may be as high as life imprisonment.

The rest is here:

Wisconsin man arrested for allegedly creating AI-generated child sexual abuse material - The Verge

Beyond keywords: AI-driven approaches to improve data discoverability – World Bank

This blog is part of AI for Data, Data for AI, a series aiming to unwrap, explain, and foster the intersection of artificial intelligence and data. This post is the third installment of the series; for further reading, here are the first and second installments.

Data is essential for generating knowledge and informing policies. Organizations that produce large volumes of diverse data face challenges in managing and disseminating it effectively. One major challenge is ensuring users can easily find the most relevant data for their needs, a problem known as data discoverability.

Organizations like the World Bank have systems to make their data assets discoverable. Traditionally, these systems use lexical or keyword search applications, indexing available metadata to enable data discovery through search terms. However, this approach limits discovery to the keywords in the accompanying metadata documentation, returning nothing beyond those terms.
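The limitation is easy to see in miniature. The sketch below uses an invented two-entry catalog (the dataset IDs and metadata strings are illustrative, not from the World Bank's actual systems) to show how a purely lexical index matches exact terms but returns nothing for a synonym:

```python
# Toy illustration of the keyword-search limitation: a lexical index
# over metadata matches only exact terms, so a synonym query finds nothing.
# The catalog entries and IDs below are invented for illustration.

catalog = {
    "EMP-001": "employment to population ratio, labour force survey",
    "GDP-002": "gross domestic product, constant prices",
}

def keyword_search(query):
    """Return IDs of datasets whose metadata contains any query term."""
    terms = query.lower().split()
    return [ds_id for ds_id, meta in catalog.items()
            if any(term in meta for term in terms)]

print(keyword_search("employment"))  # ['EMP-001']
print(keyword_search("jobs"))        # [] -- a synonym the index cannot see
```

A semantic search layer would instead embed both queries and metadata as vectors, so "jobs" and "employment" land close together and the first dataset is still retrieved.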

Artificial intelligence (AI), particularly large language models (LLMs), can enhance data systems to ensure relevant and timely data are discoverable. With richer metadata and AI-enabled solutions, organizations can take advantage of semantic search, hybrid search, knowledge graphs, and recommendation systems.

In this post, we explore how simple AI applications can overcome the limitations of keyword-based search. We also discuss AI-enabled techniques that improve our understanding of users' information needs, leading to a better data search experience.

More:

Beyond keywords: AI-driven approaches to improve data discoverability - World Bank