Archive for the ‘Artificial Intelligence’ Category

The UN needs to start regulating the Wild West of artificial intelligence – Business Standard

By Eleonore Fournier-Tombs, McGill University

Montreal (Canada), Jun 1 (The Conversation) The European Commission recently published a proposal for a regulation on artificial intelligence (AI). This is the first document of its kind to attempt to tame the multi-tentacled beast that is artificial intelligence. "The sun is starting to set on the Wild West days of artificial intelligence," writes Jeremy Kahn. He may have a point.

When this regulation comes into effect, it will change the way we conduct AI research and development. In the last few years of AI, there were few rules or regulations: if you could think it, you could build it. That is no longer the case, at least in the European Union.

There is, however, a notable exception in the regulation, which is that it does not apply to international organisations like the United Nations. Naturally, the European Union does not have jurisdiction over the United Nations, which is governed by international law. The exclusion therefore does not come as a surprise, but it does point to a gap in AI regulation. The United Nations therefore needs its own regulation for artificial intelligence, and urgently so.

AI in the United Nations

Artificial intelligence technologies have been used increasingly by the United Nations. Several research and development labs, including the Global Pulse Lab, the Jetson initiative by the UN High Commissioner for Refugees (UNHCR), UNICEF's Innovation Labs and the Centre for Humanitarian Data, have focused their work on developing artificial intelligence solutions that would support the UN's mission, notably in terms of anticipating and responding to humanitarian crises. United Nations agencies have also used biometric identification to manage humanitarian logistics and refugee claims.
The UNHCR developed a biometrics database containing the information of 7.1 million refugees. The World Food Programme has also used biometric identification in aid distribution to refugees, coming under some criticism in 2019 for its use of this technology in Yemen.

In parallel, the United Nations has partnered with private companies that provide analytical services. A notable example is the World Food Programme, which in 2019 signed a contract worth US$45 million with Palantir, an American firm specializing in data collection and artificial intelligence modelling.

No oversight, regulation

In 2014, the United States Bureau of Immigration and Customs Enforcement (ICE) awarded a US$20-billion contract to Palantir to track undocumented immigrants in the U.S., especially family members of children who had crossed the border alone. Several human rights watchdogs, including Amnesty International, have raised concerns about Palantir over human rights violations.

Like most AI initiatives developed in recent years, this work has happened largely without regulatory oversight. There have been many attempts to set up ethical modes of operation, such as the Office for the Co-ordination of Humanitarian Affairs' Peer Review Framework, which sets out a method for overseeing the technical development and implementation of AI models. In the absence of regulation, however, tools such as these, without legal backing, are merely best practices with no means of enforcement.

In the European Commission's AI regulation proposal, developers of high-risk systems must go through an authorization process before going to market, just like a new drug or car. They are required to put together a detailed package before the AI is available for use, including a description of the models and data used, along with an explanation of how accuracy, privacy and discriminatory impacts will be addressed.
The AI applications in question include biometric identification, categorisation, and evaluation of people's eligibility for public assistance benefits and services. They may also be used to dispatch emergency first-response services; all of these are current uses of AI by the United Nations.

Building trust

Conversely, the lack of regulation at the United Nations can be considered a challenge for agencies seeking to adopt more effective and novel technologies. As such, many systems seem to have been developed and later abandoned without being integrated into actual decision-making systems. An example of this is the Jetson tool, which was developed by UNHCR to predict the arrival of internally displaced persons to refugee camps in Somalia. The tool does not appear to have been updated since 2019 and seems unlikely to transition into the humanitarian organization's operations, unless, that is, it can be properly certified by a new regulatory system.

Trust in AI is difficult to obtain, particularly in United Nations work, which is highly political and affects very vulnerable populations. The onus has largely been on data scientists to develop the credibility of their tools. A regulatory framework like the one proposed by the European Commission would take the pressure off data scientists in the humanitarian sector to individually justify their activities. Instead, agencies or research labs that wanted to develop an AI solution would work within a regulated system with built-in accountability. This would produce more effective, safer and more just applications and uses of AI technology. (The Conversation)




Artificial Intelligence In Healthcare Market Worth $120.2 Billion By 2028: Grand View Research, Inc. – PRNewswire

SAN FRANCISCO, June 1, 2021 /PRNewswire/ -- The global artificial intelligence in healthcare market size is expected to reach USD 120.2 billion by 2028, expanding at a CAGR of 41.8% over the forecast period, according to a new report by Grand View Research, Inc. Growing technological advancements, coupled with an increasing need for efficient and innovative solutions to enhance clinical and operational outcomes, are contributing to market growth. The pressure to cut spending is rising globally as the cost of healthcare is growing faster than economies. Advancements in healthcare IT present opportunities to cut spending by improving care delivery and clinical outcomes. Thus, the demand for AI technologies is expected to increase in the coming years.
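
As a rough sanity check on the headline figures, the base-year market size implied by the forecast can be back-computed from the CAGR. The calculation below assumes the 41.8% rate compounds over the seven years 2021 to 2028; the report excerpt itself does not state the base figure.

```python
# Back-compute the implied 2021 market size from the report's 2028 forecast.
# Assumption (not stated in the excerpt): the CAGR compounds over the
# seven annual steps from 2021 to 2028.
forecast_2028 = 120.2   # USD billion
cagr = 0.418
years = 2028 - 2021

implied_2021 = forecast_2028 / (1 + cagr) ** years
print(f"implied 2021 market size: USD {implied_2021:.1f} billion")
```

Under that assumption, the implied 2021 base comes out to roughly USD 10-11 billion.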


Read the 150-page research report with ToC, "Artificial Intelligence In Healthcare Market Size, Share & Trends Analysis Report By Component (Software Solutions, Hardware, Service), By Application (Robot Assisted Surgery, Connected Machines, Clinical Trials), And Segment Forecasts, 2021 - 2028", at: https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-healthcare-market

Moreover, the ongoing COVID-19 pandemic and the introduction of technologically advanced products to improve patient care are anticipated to drive growth further in the coming years. The pandemic is also driving the adoption of AI in applications such as clinical trials, diagnosis, and virtual assistants, adding value to health care by analyzing complicated medical images and supporting clinicians in detection and diagnosis. In addition, an increase in the number of AI startups, coupled with heavy investment by venture capital firms in technologies that support fast and effective management of the growing number of patients with chronic diseases, is also driving the market.

In addition, the shortage of public health workers has become a major concern in many countries around the world, mainly because the demand for physicians is higher than the supply. As per WHO estimates in 2019, the global shortage of skilled personnel, including nurses, doctors, and other professionals, was approximately 4.3 million. This shortage of skilled workers is contributing to the demand for artificial intelligence-enabled systems in the industry.

Grand View Research has segmented the global artificial intelligence in healthcare market on the basis of component, application, and region.


About Grand View Research

Grand View Research, a U.S.-based market research and consulting company, provides syndicated as well as customized research reports and consulting services. Registered in California and headquartered in San Francisco, the company comprises over 425 analysts and consultants, adding more than 1,200 market research reports to its vast database each year. These reports offer in-depth analysis of 46 industries across 25 major countries worldwide. With the help of an interactive market intelligence platform, Grand View Research helps Fortune 500 companies and renowned academic institutes understand the global and regional business environment and gauge the opportunities that lie ahead.

Contact: Sherry James, Corporate Sales Specialist, USA, Grand View Research, Inc. Phone: 1-415-349-0058 Toll Free: 1-888-202-9519 Email: [emailprotected] Web: https://www.grandviewresearch.com Follow Us: LinkedIn | Twitter

SOURCE Grand View Research, Inc.


NIT-K to introduce B.Tech in Artificial Intelligence – The Hindu

The Department of Information Technology, National Institute of Technology Karnataka (NIT-K), Surathkal, has decided to start a new four-year B.Tech. course in Artificial Intelligence from the academic year 2021-22.

The Academic Senate, Board of the institute and the Union Ministry of Education have approved the course. Admissions will be through JEE (Main) score.

Karanam Uma Maheshwar Rao, Director, NIT-K, said in a release on Thursday that this degree would prepare students for industry or further study by offering specialisations in different areas of AI such as data science, human-centred computing, cyber-physical systems, and robotics.

Its curriculum will focus on the use of inputs such as video, speech, and big data to make decisions or enhance human capabilities.

Prof. Rao added: "This specialisation empowers students to build intelligent machines, software, or applications with state-of-the-art technology using machine learning, data analytics, and data visualisation technologies."

The Director said that Artificial Intelligence was earlier a subset of Computer Science, but in recent years it has grown enough to qualify as a distinct and larger discipline of its own. As a result, job opportunities for graduates of B.Tech (AI) courses are different from conventional IT jobs.

He added that the new course is in conformance with the National Education Policy 2020, which stresses the need to build a skilled workforce in mathematics, computer science, and data science, in conjunction with multidisciplinary abilities across the sciences, social sciences, and humanities.


What We Should Know About Artificial Intelligence | Omri Hurwitz | The Blogs – The Times of Israel

Imagine for a second that you are reading an article, another article, not this one.

And that article is very well written: it is intriguing, it raises questions and counters with smart, thorough answers.

"I have to know who the writer is!" you tell yourself. You look next to the title and it says: "Written by Jimmy, an Artificial Intelligence Machine."

At first, you don't understand, but you soon remember that there is this new AI (Artificial Intelligence) software that can write full-length articles. Well, not only articles; it can write novels, and even deeply moving poems.

How does that make you feel? Knowing that a machine, a very smart one, moved you this way? Made you laugh, made you cry, made you so very happy. Does it matter to you?

You might say: "What do I care? As long as it helped me feel a certain way."

Well, if you are a writer, you do care. It is your job. You get paid to write, and if it takes a machine a few seconds to outwork you, then you are in trouble.

Let me tell you this, though. You might be in another profession, another kind of artist, maybe a programmer; soon, if not already, there will be AI software that can do what you do. Just faster. Much faster. So, what will you do when that happens?

You will be the person in charge of making sure the machine works properly. Until they invent a machine for that, and then you will be the one making sure the machine that oversees the other machine is working properly. And on it goes.

In our current landscape, some of us get very excited when we hear about a new AI invention that is going to make our world much more productive. But let's not forget, this usually means that someone out there might need to adjust and find another thing to do in this world.

Don't get me wrong, there is some truly amazing AI software out there that is extremely helpful to human lives. Most of it has far more pros than cons. And the world is surely in constant evolution.

With that being said, maybe we need to find a way to measure the specific pros and cons of every new AI invention. This could help us know which ones are more likely to make our world better for us human beings, animals, plants, and trees.

We have to gain some rational clarity and morality on this subject. Because if we don't, there will be a machine that makes these decisions for us.

And that machine might be so smart, it decides, on its own, to make decisions based on what is better for its own sake.

Omri Hurwitz is a Tech Marketer and Media Strategist. His client portfolio consists of some of the leading companies and start-ups in Tech. He also has a show where he interviews leading figures from a variety of industries about the mental and mindful side of their work and how it helps them in their personal and professional lives.


The potential of artificial intelligence to bring equity in health care – MIT News

Health care is at a junction, a point where artificial intelligence tools are being introduced to all areas of the space. This introduction comes with great expectations: AI has the potential to greatly improve existing technologies, sharpen personalized medicines, and, with an influx of big data, benefit historically underserved populations.

But in order to do those things, the health care community must ensure that AI tools are trustworthy, and that they don't end up perpetuating biases that exist in the current system. Researchers at the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), an initiative to support AI research in health care, call for creating a robust infrastructure that can aid scientists and clinicians in pursuing this mission.

Fair and equitable AI for health care

The Jameel Clinic recently hosted the AI for Health Care Equity Conference to assess current state-of-the-art work in this space, including new machine learning techniques that support fairness, personalization, and inclusiveness; identify key areas of impact in health care delivery; and discuss regulatory and policy implications.

Nearly 1,400 people virtually attended the conference to hear from thought leaders in academia, industry, and government who are working to improve health care equity and further understand the technical challenges in this space and paths forward.

During the event, Regina Barzilay, the School of Engineering Distinguished Professor of AI and Health and the AI faculty lead for Jameel Clinic, and Bilal Mateen, clinical technology lead at the Wellcome Trust, announced the Wellcome Fund grant conferred to Jameel Clinic to create a community platform supporting equitable AI tools in health care.

The project's ultimate goal is not to solve an academic question or reach a specific research benchmark, but to actually improve the lives of patients worldwide. Researchers at Jameel Clinic insist that AI tools should not be designed with a single population in mind, but instead be crafted to be reiterative and inclusive, to serve any community or subpopulation. To do this, a given AI tool needs to be studied and validated across many populations, usually in multiple cities and countries. Also on the project wish list is to create open access for the scientific community at large, while honoring patient privacy, to democratize the effort.

"What became increasingly evident to us as a funder is that the nature of science has fundamentally changed over the last few years, and is substantially more computational by design than it ever was previously," says Mateen.

The clinical perspective

This call to action is a response to health care in 2020. At the conference, Collin Stultz, a professor of electrical engineering and computer science and a cardiologist at Massachusetts General Hospital, spoke on how health care providers typically prescribe treatments and why these treatments are often incorrect.

In simplistic terms, a doctor collects information on their patient, then uses that information to create a treatment plan. "The decisions providers make can improve the quality of patients' lives or make them live longer, but this does not happen in a vacuum," says Stultz.

Instead, he says that a complex web of forces can influence how a patient receives treatment. These forces go from being hyper-specific to universal, ranging from factors unique to an individual patient, to bias from a provider, such as knowledge gleaned from flawed clinical trials, to broad structural problems, like uneven access to care.

Datasets and algorithms

A central question of the conference revolved around how race is represented in datasets, since it's a variable that can be fluid, self-reported, and defined in non-specific terms.

"The inequities we're trying to address are large, striking, and persistent," says Sharrelle Barber, an assistant professor of epidemiology and biostatistics at Drexel University. "We have to think about what that variable really is. Really, it's a marker of structural racism. It's not biological, it's not genetic. We've been saying that over and over again."

Some aspects of health are purely determined by biology, such as hereditary conditions like cystic fibrosis, but the majority of conditions are not straightforward. According to Massachusetts General Hospital oncologist T. Salewa Oseni, when it comes to patient health and outcomes, research tends to assume biological factors have outsized influence, but socioeconomic factors should be considered just as seriously.

Even as machine learning researchers detect preexisting biases in the health care system, they must also address weaknesses in algorithms themselves, as highlighted by a series of speakers at the conference. They must grapple with important questions that arise in all stages of development, from the initial framing of what the technology is trying to solve to overseeing deployment in the real world.

Irene Chen, a PhD student at MIT studying machine learning, examines all steps of the development pipeline through the lens of ethics. As a first-year doctoral student, Chen was alarmed to find an out-of-the-box algorithm, which happened to project patient mortality, churning out significantly different predictions based on race. This kind of algorithm can have real impacts, too; it guides how hospitals allocate resources to patients.
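
The kind of audit that surfaced this problem can be sketched in a few lines: score a population with a fixed model, then summarize the scores per group. Everything below is invented for illustration (the toy risk score, the synthetic group shift, the 0.6 flag threshold); it is not the algorithm from Chen's study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two patient subgroups whose feature distribution
# differs, and an off-the-shelf "risk score" that tracks that feature.
n = 10_000
group = rng.integers(0, 2, size=n)          # 0 / 1: two subgroups
x = rng.normal(loc=group * 0.5, size=n)     # feature shifted for group 1

def predict(features):
    # Toy mortality-risk score (a plain sigmoid), standing in for any
    # pretrained model under audit.
    return 1 / (1 + np.exp(-features))

scores = predict(x)

# Per-group summary: a large gap in mean score or flag rate is the kind
# of disparity worth investigating before the model guides how resources
# are allocated.
for g in (0, 1):
    sel = group == g
    print(f"group {g}: mean score {scores[sel].mean():.3f}, "
          f"flagged {float((scores[sel] > 0.6).mean()):.1%}")
```

On this synthetic data the shifted group receives systematically higher scores, which is exactly the signal such an audit is designed to expose.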

Chen set about understanding why this algorithm produced such uneven results. In later work, she defined three specific sources of bias that can be disentangled from any model. The first is bias in the statistical sense: maybe the model is not a good fit for the research question. The second is variance, which is controlled by sample size. The last source is noise, which has nothing to do with tweaking the model or increasing the sample size; instead, it indicates that something happened during the data collection process, a step well before model development. Many systemic inequities, such as limited health insurance or a historic mistrust of medicine in certain groups, get rolled up into noise.
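
The three error sources described above can be made concrete with a small simulation. The numbers here are entirely made up: a quadratic ground truth, a deliberately misspecified linear model (statistical bias), finite resampled training sets (variance), and fixed label noise (the data-collection component).

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return x ** 2                     # the true signal

noise_sd = 0.5                        # irreducible label noise
x_test = np.linspace(-1, 1, 50)

preds = []
for _ in range(200):                  # many resampled training sets
    x_tr = rng.uniform(-1, 1, 30)
    y_tr = f(x_tr) + rng.normal(0, noise_sd, 30)
    coef = np.polyfit(x_tr, y_tr, deg=1)   # too simple: induces bias
    preds.append(np.polyval(coef, x_test))
preds = np.array(preds)

# Bias^2: gap between the average fitted model and the truth.
bias_sq = ((preds.mean(axis=0) - f(x_test)) ** 2).mean()
# Variance: spread of the fitted models across training sets.
variance = preds.var(axis=0).mean()
# Noise: fixed by data collection, untouched by modeling choices.
noise = noise_sd ** 2

print(f"bias^2 ~ {bias_sq:.3f}, variance ~ {variance:.3f}, noise = {noise:.3f}")
```

Refitting with a larger sample shrinks only the variance term, and a richer model shrinks only the bias term; nothing done at modeling time touches the noise, which mirrors Chen's point about inequities baked in during data collection.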

"Once you identify which component it is, you can propose a fix," says Chen.

Marzyeh Ghassemi, an assistant professor at the University of Toronto and an incoming professor at MIT, has studied the trade-off between anonymizing highly personal health data and ensuring that all patients are fairly represented. In cases like differential privacy, a machine-learning tool that guarantees the same level of privacy for every data point, individuals who are too unique in their cohort start to lose predictive influence in the model. "In health data, where trials often underrepresent certain populations, minorities are the ones that look unique," says Ghassemi.
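
The tension Ghassemi describes can be illustrated with the classic Laplace mechanism, one standard way of releasing a statistic with differential privacy (this is a general sketch of the technique, not the method from her work, and the epsilon and cohort sizes below are made up).

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy via Laplace noise."""
    return true_count + rng.laplace(scale=sensitivity / epsilon)

# Hypothetical cohort sizes and privacy budget, chosen for illustration.
majority, minority = 9_000, 40
eps = 0.5

noisy_major = dp_count(majority, eps)
noisy_minor = dp_count(minority, eps)

# The same absolute noise is a tiny relative error for the large group
# but a large one for the small group: under a uniform privacy guarantee,
# underrepresented patients effectively lose influence on the result.
print(f"majority: {noisy_major:.0f} "
      f"(rel. error {abs(noisy_major - majority) / majority:.2%})")
print(f"minority: {noisy_minor:.0f} "
      f"(rel. error {abs(noisy_minor - minority) / minority:.2%})")
```

Because the noise scale depends only on sensitivity and epsilon, not on group size, shrinking the cohort inflates the relative distortion, which is the mechanism behind small groups "looking unique" and being washed out.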

"We need to create more data, and it needs to be diverse data," she says. "These robust, private, fair, high-quality algorithms we're trying to train require large-scale data sets for research use."

Beyond Jameel Clinic, other organizations are recognizing the power of harnessing diverse data to create more equitable health care. Anthony Philippakis, chief data officer at the Broad Institute of MIT and Harvard, presented on the All of Us research program, an unprecedented project from the National Institutes of Health that aims to bridge the gap for historically under-recognized populations by collecting observational and longitudinal health data on over 1 million Americans. The database is meant to uncover how diseases present across different sub-populations.

One of the largest questions of the conference, and of AI in general, revolves around policy. Kadija Ferryman, a cultural anthropologist and bioethicist at New York University, points out that AI regulation is in its infancy, which can be a good thing. "There's a lot of opportunity for policy to be created with these ideas around fairness and justice, as opposed to having policies that have been developed, and then working to try to undo some of the policy regulations," says Ferryman.

Even before policy comes into play, there are certain best practices for developers to keep in mind. Najat Khan, chief data science officer at Janssen R&D, encourages researchers to be extremely systematic and thorough up front when choosing datasets and algorithms; a detailed feasibility assessment of data sources, types, missingness, diversity, and other considerations is key. Even large, common datasets contain inherent bias.

Even more fundamental is opening the door to a diverse group of future researchers.

"We have to ensure that we are developing and investing back in data science talent that is diverse in both background and experience, and ensuring they have opportunities to work on really important problems for patients that they care about," says Khan. "If we do this right, you'll see, and we are already starting to see, a fundamental shift in the talent that we have: a more bilingual, diverse talent pool."

The AI for Health Care Equity Conference was co-organized by MIT's Jameel Clinic; Department of Electrical Engineering and Computer Science; Institute for Data, Systems, and Society; Institute for Medical Engineering and Science; and the MIT Schwarzman College of Computing.
