Archive for the ‘Artificial Intelligence’ Category

What is artificial general intelligence (AGI)? – Android Authority

The idea of an artificial intelligence system that can think and perform tasks like a human has existed for decades, but it hasn't completely come to fruition yet. While ChatGPT and similar chatbots can output text that's consistent with human thought, they're still limited to recognizing and repeating patterns. They don't have any autonomy or ability to self-learn, improve, and solve never-seen-before problems. But some believe that we're steadily marching towards AGI, artificial general intelligence, a hypothetical future where computers possess abilities that rival our own.

Language models like GPT-4 and Google Gemini can already talk, draw, and recognize images like a human, though, so what sets AGI apart from them? Let's break it down in this article on artificial general intelligence, or AGI.

AGI, or artificial general intelligence, is a hypothetical concept that describes a machine capable of human-like understanding and reasoning. You see, today's AI systems are highly reliant on their training data and typically fall flat when presented with brand-new scenarios outside of their limited expertise. For example, even the best language models like GPT-4 often make errors while solving college-level math and physics problems.

By contrast, an AGI would not be similarly bound to a single skill or knowledge set. Furthermore, it would use logical reasoning to overcome problems it has never encountered before. Put simply, we're talking about a machine so sophisticated that it's smarter than even the best human experts. Such an AI system could perhaps even train itself to become better over time.

We're still a ways off from realizing most AI researchers' vision of AGI. However, we have seen efforts accelerate over the past couple of years. Within this short span of time, companies like OpenAI and Google have unveiled AI systems that can talk like a human, draw images, recognize objects, or do a combination of all three. These abilities form the foundation of AGI, but we're not quite there yet.


Here's a quick table that compares AI vs AGI. Keep in mind that AGI is a theoretical concept and not a definition set in stone, whereas AI systems already exist.

Intelligence level
AI: Less intelligent than humans
AGI: As good or better than a human

Ability
AI: Single purpose
AGI: Multi-purpose, can handle a variety of scenarios

Training
AI: Pre-trained, with the option of fine-tuning
AGI: Capable of continuously improving or training itself

Availability
AI: Already exists
AGI: Doesn't exist yet

Examples
AI: ChatGPT, Bing Chat, Google Bard
AGI: Still in development

You may also hear conventional artificial intelligence systems referred to as narrow AI. Likewise, AGI is often referred to as general AI or strong AI.

It's difficult to predict whether AGI is possible or not. According to some definitions of AGI, computers that surpass our intelligence would be able to solve long-standing problems that humans haven't found a way to overcome yet. In such a scenario, AGI would upend fields like medicine, biotechnology, and engineering practically overnight. That's difficult to imagine, even for an optimist about AI's potential.

Plenty of researchers have raised moral and safety concerns over the development of AGI as well. Even if AGI only matches our intelligence, it could pose a threat to humanity's existence. While the situation may not turn out as bleak as some Hollywood doomsday depictions, we've already seen how current AI systems can deceive and mislead people. For example, in early 2023, Microsoft's Bing Chat feigned convincing emotional connections with many users.


According to many AI researchers, we're not far from AGI, with predictions ranging between 2030 and 2050. Some even believe that we're already at the halfway point. For example, a team of Microsoft researchers proclaimed that GPT-4 exhibited "sparks of artificial general intelligence." They reasoned:

"GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance... We believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."

In late 2023, rumors of an AGI-related breakthrough at OpenAI dubbed Q* began circulating. Reuters reported that the company's top researchers had raised concerns about an AI-related discovery that could threaten humanity. While these claims could not be verified, OpenAI did not dispute them either.

Finally, there's also no shortage of naysayers who believe it's simply not possible for a machine to match, let alone surpass, human cognition. Unfortunately, we don't have enough evidence to declare either side correct. But as AI systems continue to get better with each passing month, the distinction between humans and machines will almost certainly become blurred soon.

See more here:
What is artificial general intelligence (AGI)? - Android Authority

E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence – The New York Times

European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world's first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.

The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.

European policymakers focused on A.I.'s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as deepfakes would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.

Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.

"Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter," Thierry Breton, the European commissioner who helped negotiate the deal, said in a statement.

Yet even as the law was hailed as a regulatory breakthrough, questions remained about how effective it would be. Many aspects of the policy were not expected to take effect for 12 to 24 months, a considerable length of time for A.I. development. And up until the last minute of negotiations, policymakers and countries were fighting over its language and how to balance the fostering of innovation with the need to safeguard against possible harm.

The deal reached in Brussels took three days of negotiations, including an initial 22-hour session that began Wednesday afternoon and dragged into Thursday. The final agreement was not immediately public as talks were expected to continue behind the scenes to complete technical details, which could delay final passage. Votes must be held in Parliament and the European Council, which comprises representatives from the 27 countries in the union.

Regulating A.I. gained urgency after last year's release of ChatGPT, which became a worldwide sensation by demonstrating A.I.'s advancing abilities. In the United States, the Biden administration recently issued an executive order focused in part on A.I.'s national security effects. Britain, Japan and other nations have taken a more hands-off approach, while China has imposed some restrictions on data use and recommendation algorithms.

At stake are trillions of dollars in estimated value as A.I. is predicted to reshape the global economy. "Technological dominance precedes economic dominance and political dominance," Jean-Noël Barrot, France's digital minister, said this week.

Europe has been one of the regions furthest ahead in regulating A.I., having started working on what would become the A.I. Act in 2018. In recent years, E.U. leaders have tried to bring a new level of oversight to tech, akin to regulation of the health care or banking industries. The bloc has already enacted far-reaching laws related to data privacy, competition and content moderation.

A first draft of the A.I. Act was released in 2021. But policymakers found themselves rewriting the law as technological breakthroughs emerged. The initial version made no mention of general-purpose A.I. models like those that power ChatGPT.

Policymakers agreed to what they called a risk-based approach to regulating A.I., where a defined set of applications face the most oversight and restrictions. Companies that make A.I. tools that pose the most potential harm to individuals and society, such as in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of what data was used to train the systems and assurances that the software did not cause harm like perpetuating racial biases. Human oversight would also be required in creating and deploying the systems.

Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright.

The European Union debate was contentious, a sign of how A.I. has befuddled lawmakers. E.U. officials were divided over how deeply to regulate the newer A.I. systems for fear of handicapping European start-ups trying to catch up to American companies like Google and OpenAI.

The law added requirements for makers of the largest A.I. models to disclose information about how their systems work and evaluate for systemic risk, Mr. Breton said.

The new regulations will be closely watched globally. They will affect not only major A.I. developers like Google, Meta, Microsoft and OpenAI, but other businesses that are expected to use the technology in areas such as education, health care and banking. Governments are also turning more to A.I. in criminal justice and the allocation of public benefits.

Enforcement remains unclear. The A.I. Act will involve regulators across 27 nations and require hiring new experts at a time when government budgets are tight. Legal challenges are likely as companies test the novel rules in court. Previous E.U. legislation, including the landmark digital privacy law known as the General Data Protection Regulation, has been criticized for being unevenly enforced.

"The E.U.'s regulatory prowess is under question," said Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, who has advised European lawmakers on the A.I. Act. "Without strong enforcement, this deal will have no meaning."

Read this article:
E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence - The New York Times

Letter: Artificial intelligence is used far and wide – Niagara Now

Dear editor:

I want to thank Dr. William Brown for his excellent column ("Science and the gods of the 21st century," The Lake Report, Dec. 7) and his review of scientific advances and inventions that have made the last few centuries so amazing and interesting.

That's certainly the sort of information that will benefit all of us in this technological age.

However, I was stopped in my tracks when Dr. Brown apparently introduced politics into his fine scientific story.

Suddenly he included this paragraph: "We've heard how threatening AI can be in China these days as a means of tracking dissenters using data on smartphones and facial recognition technologies."

Surely the good doctor has read news items regarding law enforcement organizations in Canada and the United States that also seize phones and use facial recognition in the pursuit of troublesome citizens.

Perhaps politics, in this instance, was just an unnecessary distraction.

George Dunbar, Toronto

Here is the original post:
Letter: Artificial intelligence is used far and wide - Niagara Now

AI News Anchors Are Coming And We Are Doomed – Outkick

If you thought it was already difficult enough to find out which news is legitimate and which is fake, get ready because it just got a whole lot harder.

Beginning next year, California-based news station Channel 1 will roll out digital news anchors to provide stories and updates. The service will be used on free and ad-supported TV and will include services like Tubi, Crackle and Pluto. To make matters worse, the frightening situation is moving extremely quickly, with the network expected to begin utilizing AI anchors as early as this coming February.

We all know that we are way beyond the realm of pulling back on AI. Google, Twitter (X), ChatGPT and more are here and only going to expand. In some cases, the rise of AI has been beneficial: think healthcare procedures, financial activities, big tech backends, etc.

But one area where we must absolutely not allow AI to take over is the media and newsgathering process.

The public is already paranoid, mentally disheveled and clueless when it comes to what is real versus fake news. Add in the fact that there are active propaganda campaigns, bots, disinformation and misinformation being thrown around second by second, as well as active censorship taking place on social media platforms, and we actually need MORE ways to get news now than ever before.

So when you have real-looking but fake, AI-bot style people giving you the news, that's at the very least troubling and can quickly get out of hand with disastrous consequences.

After all, whoever controls the news controls the people.

The biggest issue with AI integrating itself into the news world is that nobody is going to know who or what to believe.

This isn't just Reuters having artificial intelligence write the copy for a basic news story. You now have anchormen and women actively telling their version of the news.

What's happening is that AI is so far beyond the average human's mental capacity to understand it that they just don't even bother. No one knows how AI works or what goes on behind the scenes; it just is. This, however, only gives the powers that be (the ones who put in that information, curate it, integrate it and ultimately CREATE the news) unlimited power.

We already have a major problem with the rise of deepfakes. Hell, they are being used in political campaign ads, and idiots on social media are believing them! Remember that Queen Elizabeth II video years ago? ESPN got in hot water a few weeks ago tweeting out a doctored video of Damian Lillard as if he were reacting to a game he had just played, when in reality it was a clip from years ago.

Or how about this as a scary, real-world implication of AI? Criminals are now using technology that can recreate someone's voice at 99.9% similarity to FAKE ransom calls to family members, demanding they wire, Venmo or transfer money. Meanwhile, the person was never in harm's way in the first place; it was all deceptive criminal tactics using AI tech.

The rise of AI should make us MORE nervous, should make us question everything, and is all the more reason why we need actual human beings, not a computer script or robots, to tell us the news of the day.

An immediate solution? Push back on AI while you still can, especially when it comes to content and news.

Go here to see the original:
AI News Anchors Are Coming And We Are Doomed - Outkick

Artificial Intelligence in Healthcare: 3 Stocks Transforming the Industry – InvestorPlace


Artificial intelligence is being applied across the healthcare sector to improve the delivery of care overall. AI stocks have the potential to positively impact sectors across the market, and healthcare is no different in that regard.

Healthcare firms are leveraging artificial intelligence to fundamentally change the industry. For example, AI is being applied to fields such as drug discovery and development, where, as just one example, it increases the speed with which target compounds and genes can be identified.

Further, artificial intelligence is being applied in healthcare to improve fields such as personalized medicine and may lead to breakthroughs in areas such as gene therapy. It can also be leveraged to increase the speed and accuracy of medical diagnoses. In short, artificial intelligence has the potential to fundamentally improve healthcare and lift the prices of stocks in the sector.


Intuitive Surgical (NASDAQ:ISRG) is best known for its da Vinci robotic surgical systems. The company has leveraged artificial intelligence to improve healthcare delivery overall.

In July, the company launched its first digital tool that utilizes artificial intelligence to help surgeons better understand case data. That tool leverages data collected across its da Vinci devices and delivers correlations derived from that data.

Further, Intuitive Surgical has embedded AI into many of its products and systems. For example, the company embedded AI into its stapler system, where it is used to measure the pressure with which the stapler works. That data is being used to create optimal outcomes for procedures. Of course, Intuitive Surgical is also applying AI to imaging, which is then fed into the da Vinci systems pre-operation.

Fundamentally, the company continues to perform well, with revenues increasing by 12% during the most recent period as procedure volume increased by 19%.


GE Healthcare Technologies (NASDAQ:GEHC) has only existed as an independent entity for slightly less than a year. The company was spun out of its parent in December last year and has performed well since.

GE Healthcare Technologies released strong earnings at the end of October. The company reported better-than-expected earnings and margins, which lifted its share price.

So, there continue to be many reasons to consider investing in the stock from a fundamental perspective.

GE Healthcare Technologies has also recently issued a few press releases indicating that the company continues to focus on the application of AI to healthcare.

For example, on Nov. 26, the company announced that it had showcased more than 40 innovations, among them AI-enabled imaging and ultrasound products and services. The company appears to be developing products and services to fill an expected gap in the radiology field, as a shortfall of radiologists is anticipated between now and 2033.

Then, a day later, on Nov. 27, the firm announced that it had released a suite of AI-enabled products and services called MyBreastAI. The suite is intended to increase the speed with which clinicians diagnose breast cancer.


Schrodinger (NASDAQ:SDGR) is a company that provides software solutions to the pharmaceutical industry. The company engages in drug discovery, leveraging physics and machine learning to speed the discovery process.

The company leverages physics-based models to understand the properties of a given compound. In practice, that means using computational models to estimate properties such as binding affinity and absorption in order to identify suitable compounds as potential drug candidates. The company then feeds that data through a complex machine-learning model to help identify appropriate compounds more accurately and quickly.
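As a rough sketch of how this general kind of pipeline can work (this is not Schrodinger's actual software; the descriptors, synthetic data, and scikit-learn model below are hypothetical stand-ins), physics-derived properties computed for known compounds can be used to train a model that then ranks new candidates:

```python
# Illustrative sketch only: a generic "physics features -> ML ranking" workflow.
# The descriptors, data, and model are hypothetical, not Schrodinger's system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical physics-derived descriptors for 1,000 known compounds:
# estimated binding affinity (kcal/mol), absorption score, molecular weight.
X = np.column_stack([
    rng.normal(-8.0, 2.0, 1000),    # binding affinity estimate
    rng.uniform(0.0, 1.0, 1000),    # absorption score
    rng.normal(400.0, 80.0, 1000),  # molecular weight
])
# Hypothetical labels: 1 if the compound advanced as a drug candidate, else 0.
y = ((X[:, 0] < -8.5) & (X[:, 1] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a model to predict which compounds are worth synthesizing and testing.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score a new batch of compounds and surface the most promising ones first.
new_compounds = rng.normal([-8.0, 0.5, 400.0], [2.0, 0.2, 80.0], size=(5, 3))
scores = model.predict_proba(new_compounds)[:, 1]
for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"Rank {rank}: compound {idx}, candidate probability {scores[idx]:.2f}")
```

In a real discovery platform, the descriptors would come from physics-based simulations rather than random numbers, and the feature set and models would be far richer; the point is simply that physics-derived properties feed a learned ranking of candidates.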

The company is one of the leading names in the drug discovery sector. It benefits from a solid reception and high price targets on Wall Street. This makes it one of the stocks pioneering AI in healthcare.

Third-quarter revenues grew to $42.6 million from $37 million. Schrodinger's software sales revenue increased during the period; however, investors should know that the company continues to produce losses overall.

On the date of publication, Alex Sirois did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Alex Sirois is a freelance contributor to InvestorPlace whose personal stock investing style is focused on long-term, buy-and-hold, wealth-building stock picks. Having worked in several industries from e-commerce to translation to education and utilizing his MBA from George Washington University, he brings a diverse set of skills through which he filters his writing.

Read the rest here:
Artificial Intelligence in Healthcare: 3 Stocks Transforming the Industry - InvestorPlace