Archive for the ‘Artificial Intelligence’ Category

E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence – The New York Times

European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world's first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.

The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.

European policymakers focused on A.I.'s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as deepfakes would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.

Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.

"Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter," Thierry Breton, the European commissioner who helped negotiate the deal, said in a statement.

Yet even as the law was hailed as a regulatory breakthrough, questions remained about how effective it would be. Many aspects of the policy were not expected to take effect for 12 to 24 months, a considerable length of time for A.I. development. And up until the last minute of negotiations, policymakers and countries were fighting over its language and how to balance the fostering of innovation with the need to safeguard against possible harm.

The deal reached in Brussels took three days of negotiations, including an initial 22-hour session that began Wednesday afternoon and dragged into Thursday. The final agreement was not immediately public as talks were expected to continue behind the scenes to complete technical details, which could delay final passage. Votes must be held in Parliament and the European Council, which comprises representatives from the 27 countries in the union.

Regulating A.I. gained urgency after last year's release of ChatGPT, which became a worldwide sensation by demonstrating A.I.'s advancing abilities. In the United States, the Biden administration recently issued an executive order focused in part on A.I.'s national security effects. Britain, Japan and other nations have taken a more hands-off approach, while China has imposed some restrictions on data use and recommendation algorithms.

At stake are trillions of dollars in estimated value as A.I. is predicted to reshape the global economy. "Technological dominance precedes economic dominance and political dominance," Jean-Noël Barrot, France's digital minister, said this week.

Europe has been one of the regions furthest ahead in regulating A.I., having started working on what would become the A.I. Act in 2018. In recent years, E.U. leaders have tried to bring a new level of oversight to tech, akin to regulation of the health care or banking industries. The bloc has already enacted far-reaching laws related to data privacy, competition and content moderation.

A first draft of the A.I. Act was released in 2021. But policymakers found themselves rewriting the law as technological breakthroughs emerged. The initial version made no mention of general-purpose A.I. models like those that power ChatGPT.

Policymakers agreed to what they called a risk-based approach to regulating A.I., where a defined set of applications face the most oversight and restrictions. Companies that make A.I. tools that pose the most potential harm to individuals and society, such as in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of what data was used to train the systems and assurances that the software did not cause harm like perpetuating racial biases. Human oversight would also be required in creating and deploying the systems.

Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright.

The European Union debate was contentious, a sign of how A.I. has befuddled lawmakers. E.U. officials were divided over how deeply to regulate the newer A.I. systems for fear of handicapping European start-ups trying to catch up to American companies like Google and OpenAI.

The law added requirements for makers of the largest A.I. models to disclose information about how their systems work and evaluate for systemic risk, Mr. Breton said.

The new regulations will be closely watched globally. They will affect not only major A.I. developers like Google, Meta, Microsoft and OpenAI, but other businesses that are expected to use the technology in areas such as education, health care and banking. Governments are also turning more to A.I. in criminal justice and the allocation of public benefits.

Enforcement remains unclear. The A.I. Act will involve regulators across 27 nations and require hiring new experts at a time when government budgets are tight. Legal challenges are likely as companies test the novel rules in court. Previous E.U. legislation, including the landmark digital privacy law known as the General Data Protection Regulation, has been criticized for being unevenly enforced.

"The E.U.'s regulatory prowess is under question," said Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, who has advised European lawmakers on the A.I. Act. "Without strong enforcement, this deal will have no meaning."

Read this article:
E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence - The New York Times

Letter: Artificial intelligence is used far and wide – Niagara Now

Dear editor:

I want to thank Dr. William Brown for his excellent column ("Science and the gods of the 21st century," The Lake Report, Dec. 7) and his review of the scientific advances and inventions that have made the last few centuries so amazing and interesting.

That's certainly the sort of information that will benefit all of us in this technological age.

However, I was stopped in my tracks when Dr. Brown apparently introduced politics into his fine scientific story.

Suddenly he included this paragraph: "We've heard how threatening AI can be in China these days as a means of tracking dissenters using data on smartphones and facial recognition technologies."

Surely the good doctor has read news items regarding law enforcement organizations in Canada and the United States that also seize phones and use facial recognition in the pursuit of troublesome citizens.

Perhaps politics, in this instance, was just an unnecessary distraction.

George Dunbar, Toronto

Here is the original post:
Letter: Artificial intelligence is used far and wide - Niagara Now

AI News Anchors Are Coming And We Are Doomed – Outkick

If you thought it was already difficult enough to tell which news is legitimate and which is fake, get ready, because it just got a whole lot harder.

Beginning next year, California-based news station Channel 1 will roll out digital news anchors to deliver stories and updates. The service will run on free, ad-supported TV platforms, including services like Tubi, Crackle and Pluto. To make matters worse, the frightening situation is moving extremely quickly, with the network expected to begin utilizing AI anchors as early as this coming February.

We all know that we are way beyond the realm of pulling back on AI. Google, Twitter (X), ChatGPT and more are here and only going to expand. In some cases, the rise of AI has been beneficial: think healthcare procedures, financial activities, big tech back ends, etc.

But one area where we must absolutely not allow AI to take over is the media and the newsgathering process.

The public is already paranoid, mentally disheveled and clueless when it comes to what is real versus fake news. Add in the fact that active propaganda campaigns, bots, disinformation and misinformation are being thrown around second by second, along with active censorship taking place on social media platforms, and we actually need MORE trustworthy ways to get news now than ever before.

So when you have real-looking but fake, AI-bot style people giving you the news, that's at the very least troubling, and it can quickly get out of hand with disastrous consequences.

After all, whoever controls the news controls the people.

The biggest issue with AI integrating itself into the news world is that nobody is going to know who, and now what, to believe.

This isn't just Reuters having artificial intelligence write a basic news story. You now have anchormen and women actively telling their version of the news.

What's happening is that AI is so far beyond the average human's mental capacity to understand it that most people just don't even bother. No one knows how AI works or what goes on behind the scenes; it just is. This, however, hands unlimited power to the powers that be and to those who put in the information, curate it, integrate it and ultimately CREATE the news.

We already have a major problem with the rise of deepfakes. Hell, they are being used in political campaign ads and idiots on social media are believing them! Remember that Queen Elizabeth II video from years ago? ESPN got in hot water a few weeks ago for tweeting out a doctored video of Damian Lillard as if he were reacting to a game he had just played, when in reality it was a clip from years earlier.

Or how about this as a scary, real-world implication of AI? Criminals are now using technology that can recreate someone's voice at 99.9 percent similarity to FAKE ransom calls to family members, demanding they wire, Venmo or transfer money. Meanwhile, the person was never in harm's way in the first place; it was all deceptive criminal tactics using AI tech.

The rise of AI should make us MORE nervous and should make us question everything, and it is all the more reason why we need actual human beings, not a computer script or robots, to tell us the news of the day.

An immediate solution? Push back on AI while you still can, especially when it comes to content and news.

Go here to see the original:
AI News Anchors Are Coming And We Are Doomed - Outkick

Artificial Intelligence in Healthcare: 3 Stocks Transforming the Industry – InvestorPlace


Artificial intelligence is being applied across the healthcare sector to improve care delivery overall. AI has the potential to positively impact sectors across the market, and healthcare is no different in that regard.

Healthcare firms are leveraging artificial intelligence to fundamentally change the industry. For example, AI is being applied to fields such as drug discovery and development, where it increases the speed with which target compounds and genes can be identified.

Further, artificial intelligence is being applied in healthcare to improve fields such as personalized medicine and may lead to breakthroughs in areas such as gene therapy. It can also be leveraged to increase the speed and accuracy of medical diagnoses. In short, artificial intelligence has the potential to fundamentally improve healthcare and to lift the prices of stocks in the sector.


Intuitive Surgical (NASDAQ:ISRG) is best known for its da Vinci robotic surgical systems. The company has leveraged artificial intelligence to improve healthcare delivery overall.

In July, the company launched its first digital tool that utilizes artificial intelligence to help surgeons better understand case data. That tool leverages data collected across its da Vinci devices and delivers correlations derived from that data.

Further, Intuitive Surgical has embedded AI into many of its products and systems. For example, the company embedded AI into its stapler system, where it is used to measure the pressure with which the stapler operates. That data is being used to optimize procedure outcomes. Of course, Intuitive Surgical is also applying AI to imaging, which is then fed into the da Vinci systems pre-operation.

Fundamentally, the company continues to perform well, with revenues increasing by 12% during the most recent period as procedure volume increased by 19%.


GE Healthcare Technologies (NASDAQ:GEHC) has only existed as an independent entity for slightly less than a year. The company was spun out of its parent in December last year and has performed well since.

GE Healthcare Technologies released strong earnings at the end of October, reporting better-than-expected earnings and margins that lifted its share price.

So, there continue to be many reasons to consider investing in the stock from a fundamental perspective.

GE Healthcare Technologies has also issued several recent press releases indicating that the company continues to focus on the application of AI to healthcare.

For example, on Nov. 26, the company announced that it had showcased more than 40 innovations, among them AI-enabled imaging and ultrasound products and services. The company appears to be developing products and services to fill an expected gap within the radiology field, where a shortfall of radiologists is projected between now and 2033.

Then, a day later, on Nov. 27, the firm announced that it had released a suite of AI-enabled products and services called MyBreastAI. The suite is intended to increase the speed with which clinicians diagnose breast cancer.


Schrodinger (NASDAQ:SDGR) provides software solutions to the pharmaceutical industry, leveraging physics and machine learning to speed the drug discovery process.

The company leverages physics-based models to understand the properties of a given compound. That is, it uses computational models to estimate properties such as binding affinity and absorption in order to identify suitable compounds as potential drug candidates, then feeds that data through a machine-learning model to identify appropriate compounds more accurately and quickly.
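That pipeline, physics-derived properties computed first and a machine-learning model ranking candidates afterward, can be sketched in a few lines of Python. The sketch below is purely illustrative: the descriptors, the synthetic assay labels and the off-the-shelf random forest are invented stand-ins, not Schrodinger's actual platform.

```python
# Illustrative sketch of a physics-then-ML screening pipeline.
# All data here is synthetic; none of this is Schrodinger's software.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical physics-derived descriptors for 500 assayed compounds,
# e.g., estimated binding affinity, solubility and absorption scores.
X_train = rng.normal(size=(500, 3))

# Synthetic "measured activity" labels standing in for assay results.
y_train = X_train @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.1, size=500)

# Learn the mapping from cheap physics descriptors to observed activity.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score a large virtual library and keep only the top-ranked compounds
# for slower, more expensive simulation or lab validation.
library = rng.normal(size=(10_000, 3))
top20 = np.argsort(model.predict(library))[::-1][:20]
print("Indices of the 20 most promising compounds:", top20)
```

The value of this pattern is triage: the learned model is far cheaper to evaluate than a full physics simulation, so it narrows a huge compound library down to a short list worth the expensive computation.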

The company is one of the leading names in the drug discovery sector, and it benefits from a solid reception and high price targets from Wall Street. This makes it one of the stocks pioneering AI in healthcare.

Third-quarter revenues grew to $42.6 million from $37 million, an increase of roughly 15 percent. Schrodinger's software sales revenue increased during the period; however, investors should know that the company continues to produce losses overall.

On the date of publication, Alex Sirois did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Alex Sirois is a freelance contributor to InvestorPlace whose personal stock investing style is focused on long-term, buy-and-hold, wealth-building stock picks. Having worked in several industries, from e-commerce to translation to education, and holding an MBA from George Washington University, he brings a diverse set of skills through which he filters his writing.

Read the rest here:
Artificial Intelligence in Healthcare: 3 Stocks Transforming the Industry - InvestorPlace

Board to hear artificial intelligence guidance recommendations … – The Florida Bar

When it meets December 1 in Destin, the Board of Governors will consider a special committee's proposal for offering guidance on the use of artificial intelligence in the practice of law.

The Special Committee on AI Tools & Resources, formed by President Scott Westheimer this summer, is proposing a series of rule revisions that, if approved, would be forwarded to the Supreme Court for final consideration.

A proposed amendment to the final paragraph of the commentary to Bar Rule 4-1.1 (Competence) would make it clear that a lawyer's understanding of "the benefits and risks associated with the use of technology" includes generative artificial intelligence.

A proposed amendment to Bar Rule 4-1.6 (Confidentiality of Information) would add a warning to a portion of the commentary subtitled "Acting Competently to Preserve Client Confidentiality." The proposed sentence would state: "For example, a lawyer should be aware that generative artificial intelligence may create risks to the lawyer's duty of confidentiality."

A proposed amendment to Bar Rule 4-5.3 (Responsibilities Regarding Nonlawyer Assistants) would add a sentence to the first paragraph of the comment stating, "A lawyer should also consider safeguards when assistants use technologies such as artificial intelligence," and, within the first paragraph under the heading "Nonlawyers Outside the Firm," would add "using generative artificial intelligence."

Another proposed amendment, to Bar Rule 4-5.1 (Responsibilities of Partners, Managers, and Supervisory Lawyers), would add language to the second paragraph of the comment directing lawyers to "consider safeguards for the firm's use of technologies such as generative artificial intelligence."

A staff analysis refers to an incident that has become a red flag for lawyers nationwide when the subject of artificial intelligence arises. It notes that "[l]awyers have improperly used generative AI to their detriment."

"For example, a lawyer has been sanctioned in New York for filing a legal document generated by AI (ChatGPT) that included citations that were made up by the generative AI application," the analysis states.

The Bar rules themselves state principles broad enough to address AI, the analysis notes, but the commentary will alert Florida lawyers to their responsibilities regarding AI.

In other action, the Board will be asked to affirm or reverse a Professional Ethics Committee decision that a law firm may not ethically identify a nonlawyer as the firm's Chief Executive Officer (CEO), despite limitations on the nonlawyer's authority.

The inquiring firm asserted that the nonlawyer CEO would report to the firm's supervising partner, would not have a policymaking function, would not supervise the practice of law, and would be paid a salary and bonuses unconnected to the law firm's profits.

Further, the inquiry states that the position would not be made known to the public other than on the firm's website, but concedes that the title would be included on business cards and possibly other written material.

The Professional Ethics Committee voted on June 23 to affirm a staff opinion that the use of the title by a nonlawyer would violate Bar Rule 4-8.6(c), which provides that "[n]o person may serve as a partner, manager, director or executive officer of an authorized business entity that is engaged in the practice of law in Florida unless such person is legally qualified to render legal services in this state."

The inquiring firm is asking for board review, arguing that application of the rule is limited to only those instances where the nonlawyer employee engages in a policymaking function.

In other business, the board will be asked to consider:

Go here to see the original:
Board to hear artificial intelligence guidance recommendations ... - The Florida Bar