Archive for the ‘Machine Learning’ Category

CERC plans to embrace AI, machine learning to improve functioning – Business Standard

The apex power sector regulator, the Central Electricity Regulatory Commission (CERC), is planning to set up an artificial intelligence (AI)-based regulatory expert system tool (REST) to improve access to information and assist the commission in the discharge of its duties. So far, only the Supreme Court (SC) has an electronic filing (e-filing) system and is in the process of building an AI-based back-end service.

The CERC will be the first such quasi-judicial regulatory body to embrace AI and machine learning (ML). The decision comes at a time when the CERC has been shut for four ...

First Published: Fri, January 15 2021. 06:10 IST

Read more:
CERC plans to embrace AI, machine learning to improve functioning - Business Standard

Machine Learning and Life-and-Death Decisions on the Battlefield – War on the Rocks

In 1946 the New York Times revealed one of World War II's top secrets: "an amazing machine which applies electronic speeds for the first time to mathematical tasks hitherto too difficult and cumbersome for solution." One of the machine's creators offered that its purpose was to "replace, as far as possible, the human brain." While this early version of a computer did not replace the human brain, it did usher in a new era in which, according to the historian Jill Lepore, technological change "wildly outpaced the human capacity for moral reckoning."

That era continues with the application of machine learning to questions of command and control. The application of machine learning is in some areas already a reality: the U.S. Air Force, for example, has used it as a working aircrew member on a military aircraft, and the U.S. Army is using it to choose the right shooter for a target identified by an overhead sensor. The military is making strides toward using machine learning algorithms to direct robotic systems, analyze large sets of data, forecast threats, and shape strategy. Using algorithms in these areas and others offers awesome military opportunities, from saving person-hours in planning to outperforming human pilots in dogfights to using a multihypothesis semantic engine to improve our understanding of global events and trends. Yet with the opportunity of machine learning comes ethical risk: the military could surrender life-and-death choice to algorithms, and surrendering choice abdicates one's status as a moral actor.

So far, the debate about algorithms' role in battlefield choice has been either/or: Either algorithms should make life-and-death choices because there is no other way to keep pace on an increasingly autonomous battlefield, or humans should make life-and-death choices because there is no other way to maintain moral standing in war. This is a false dichotomy. Choice is not a unitary thing to be handed over either to algorithms or to people. At all levels of decision-making (i.e., tactical, operational, and strategic), choice is the result of a several-step process. The question is not whether algorithms or humans should make life-and-death choices, but rather which steps in the process each should be responsible for. By breaking choice into its constituent parts and training servicemembers in decision science, the military can both increase decision speed and maintain moral standing. This article proposes how it can do both. It describes the constituent components of a choice, then discusses which of those components should be performed by machine learning algorithms and which require human input.

What Decisions Are and What It Takes To Make Them

Consider a fighter pilot hunting surface-to-air missiles. When the pilot attacks, she is determining that her choice, relative to other possibilities before her, maximizes expected net benefit, or utility. She may not consciously process the decision in these terms and may not make the calculation perfectly, but she is nonetheless determining which decision optimizes expected costs and benefits. To be clear, the example of the fighter pilot is not meant to bound the discussion. The basic conceptual process is the same whether the decision-makers are trigger-pullers on the front lines or commanders in distant operations centers. The scope and particulars of a decision change at higher levels of responsibility, of course, from risking one unit to many, or risking one bystander's life to risking hundreds. Regardless of where the decision-maker sits (or, rather, where the authority to choose to employ force lawfully resides), choice requires the same four fundamental steps.

The first step is to list the alternatives available to the decision-maker. The fighter pilot, again just for example, might have two alternatives: attack the missile system from a relatively safer long-range approach, or attack from closer range with more risk but a higher probability of a successful attack. The second step is to take each of these alternatives and define the relevant possible results. In this case, the pilot's relevant outcomes might include killing the missile while surviving, killing the missile without surviving, failing to kill the system but surviving, and, lastly, failing to kill the missile while also failing to survive.

The third step is to make a conditional probability estimate, or an estimate of the likelihood of each result assuming a given alternative. If the pilot goes in close, what is the probability that she kills the missile and survives? What is the same probability for the attack from long range? And so on for each outcome of each alternative.

So far the pilot has determined what she can do, what may happen as a result, and how likely each result is. She now needs to say how much she values each result. To do this she needs to identify how much she cares about each dimension of value at play in the choice, which in highly simplified terms are the benefit to mission that comes from killing the missile, and the cost that comes from sacrificing her life, the lives of targeted combatants, and the lives of bystanders. It is not enough to say that killing the missile is beneficial and sacrificing life is costly. She needs to put benefit and cost into a single common metric, sometimes called a utility, so that the value of one can be directly compared to the value of the other. This relative comparison is known as a value trade-off, the fourth step in the process. Whether the decision-maker is on the tactical edge or making high-level decisions, the trade-off takes the same basic form: The decision-maker weighs the value of attaining a military objective against the cost of dollars and lives (friendly, enemy, and civilian) needed to attain it. This trade-off is at once an ethical and a military judgment: it puts a price on life at the same time that it puts a price on a military objective.

Once these four steps are complete, rational choice is a matter of fairly simple math. Utilities are weighted by an outcome's likelihood: high-likelihood outcomes get more weight and are more likely to drive the final choice.
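To make that arithmetic concrete, here is a minimal Python sketch of the four-step process applied to the fighter-pilot example. It is an illustration only: the alternatives, probabilities, and utility values are hypothetical numbers, not anything drawn from a real system.

```python
# A minimal sketch (not any fielded system) of the four-step choice process
# described above, using the fighter-pilot example. All numbers are hypothetical.

# Step 1: alternatives available to the decision-maker.
alternatives = ["attack_long_range", "attack_close_range"]

# Steps 2-3: relevant outcomes and the conditional probability of each,
# given the chosen alternative. Probabilities for each alternative sum to 1.
outcome_probs = {
    "attack_long_range": {
        ("kill", "survive"): 0.40,
        ("kill", "lost"): 0.05,
        ("no_kill", "survive"): 0.50,
        ("no_kill", "lost"): 0.05,
    },
    "attack_close_range": {
        ("kill", "survive"): 0.65,
        ("kill", "lost"): 0.15,
        ("no_kill", "survive"): 0.10,
        ("no_kill", "lost"): 0.10,
    },
}

# Step 4: value trade-offs, supplied by a human decision-maker. A single
# common metric (utility) prices mission success against the cost of a loss.
utilities = {
    ("kill", "survive"): 100,
    ("kill", "lost"): 100 - 80,     # mission value minus the cost of the loss
    ("no_kill", "survive"): 0,
    ("no_kill", "lost"): -80,
}

def expected_utility(alternative: str) -> float:
    """Weight each outcome's utility by its conditional probability."""
    return sum(p * utilities[outcome]
               for outcome, p in outcome_probs[alternative].items())

for alt in alternatives:
    print(alt, round(expected_utility(alt), 1))
# The rational choice is simply the alternative with the highest expected utility.
```

With these particular numbers the close-range attack wins on expected utility; change the utilities in step four and the rational choice can flip, which is exactly why that step belongs to a human.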

It is important to note that, for both human and machine decision-makers, rational is not necessarily the same thing as ethical or successful. The rational choice process is the best way, given uncertainty, to optimize what decision-makers say they value. It is not a way of saying that one has the right values and does not guarantee a good outcome. Good decisions will still occasionally lead to bad outcomes, but this decision-making process optimizes results in the long run.

At least in the U.S. Air Force, pilots do not consciously step through expected utility calculations in the cockpit. Nor is it reasonable to assume that they should: performing the mission is challenging enough. For human decision-makers, explicitly working through the steps of expected utility calculations is impractical, at least on a battlefield. It's a different story, however, with machines. If the military wants to use algorithms to achieve decision speed in battle, then it needs to make the components of a decision computationally tractable; that is, the four steps above need to reduce to numbers. The question becomes whether it is possible to provide the numbers in such a way that combines the speed that machines can bring with the ethical judgment that only humans can provide.

Where Algorithms Are Better and Where Human Judgment Is Necessary

Computer and data science have a long way to go to exercise the power of machine learning and data representation assumed here. The Department of Defense should continue to invest heavily in the research and development of modeling and simulation capabilities. However, as it does that, we propose that algorithms list the alternatives, define the relevant possible results, and give conditional probability estimates (the first three steps of rational decision-making), with occasional human inputs. The fourth step of determining value should remain the exclusive domain of human judgment.

Machines should generate alternatives and outcomes because they are best suited for the complexity and rule-based processing that those steps require. In the simplified example above there were only two possible alternatives (attack from close or far) with four possible outcomes (kill the missile and survive, kill the missile and don't survive, don't kill the missile and survive, and don't kill the missile and don't survive). The reality of future combat will, of course, be far more complicated. Machines will be better suited for handling this complexity, exploring numerous solutions, and illuminating options that warfighters may not have considered. This is not to suggest, though, that humans will play no role in these steps. Machines will need to make assumptions and pick starting points when generating alternatives and outcomes, and it is here that human creativity and imagination can help add value.

Machines are hands-down better suited for the third step: estimating the probabilities of different outcomes. Human judgments of probability tend to rely on heuristics, such as how available examples are in memory, rather than more accurate indicators like relevant base rates, or how often a given event has historically occurred. People are even worse when it comes to understanding probabilities for a chain of events. Even a relatively simple combination of two conditional probabilities is beyond the reach of most people. There may be openings for human input when unrepresentative training data encodes bias into the resulting algorithms, something humans are better equipped to recognize and correct. But even then, the departures should be marginal, rather than the complete abandonment of algorithmic estimates in favor of intuition. Probability, like long division, is an arena best left to machines.
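To see how quickly intuition falls behind, consider chaining just three conditional probabilities, as in the hypothetical sketch below; the events and numbers are illustrative only.

```python
# A minimal illustration of why chained conditional probabilities are easy for
# machines and hard for intuition. The event labels and numbers are hypothetical.

p_reach_launch_params = 0.9   # P(A): pilot reaches a valid launch position
p_kill_given_launch   = 0.7   # P(B | A): missile is killed given a valid launch
p_survive_given_kill  = 0.85  # P(C | A and B): pilot egresses safely after the kill

# Probability of the full chain: reach parameters, kill the missile, and survive.
p_chain = p_reach_launch_params * p_kill_given_launch * p_survive_given_kill
print(round(p_chain, 3))  # roughly 0.54 -- noticeably lower than any single step suggests
```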

While machines take the lead with occasional human input in steps one through three, the opposite is true for the fourth step of making value trade-offs. This is because value trade-offs capture both ethical and military complexity, as many commanders already know. Even with perfect information (e.g., the mission will succeed but it will cost the pilot's life) commanders can still find themselves torn over which decision to make. Indeed, whether and how one should make such trade-offs is the essence of ethical theories like deontology or consequentialism. And prioritization of which military objectives will most efficiently lead to success (however defined) is an always-contentious and critical part of military planning.

As long as commanders and operators remain responsible for trade-offs, they can maintain control and responsibility for the ethicality of the decision even as they become less involved in the other components of the decision process. Of note, this control and responsibility can be built into the utility function in advance, allowing systems to execute at machine speed when necessary.

A Way Forward

Incorporating machine learning and AI into military decision-making processes will be far from easy, but it is possible and a military necessity. China and Russia are using machine learning to speed their own decision-making, and unless the United States keeps pace it risks finding itself at a serious disadvantage on future battlefields.

The military can ensure the success of machine-aided choice by ensuring that the appropriate division of labor between human and machines is well understood by both decision-makers and technology developers.

The military should begin by expanding developmental education programs so that they rigorously and repeatedly cover decision science, something the Air Force has started to do in its Pinnacle sessions, its executive education program for two- and three-star generals. Military decision-makers should learn the steps outlined above, and also learn to recognize and control for inherent biases, which can shape a decision as long as there is room for human input. Decades of decision science research have shown that intuitive decision-making is replete with systematic biases like overconfidence, irrational attention to sunk costs, and changes in risk preference based merely on how a choice is framed. These biases are not restricted just to people. Algorithms can show them as well when training data reflects biases typical of people. Even when algorithms and people split responsibility for decisions, good decision-making requires awareness of and a willingness to combat the influence of bias.

The military should also require technology developers to address ethics and accountability. Developers should be able to show that algorithmically generated lists of alternatives, results, and probability estimates are not biased in such a way as to favor wanton destruction. Further, any system addressing targeting, or the pairing of military objectives with possible means of affecting those objectives, should be able to demonstrate a clear line of accountability to a decision-maker responsible for the use of force. One means of doing so is to design machine learning-enabled systems around the decision-making model outlined in this article, which maintains accountability of human decision-makers through their enumerated values. To achieve this, commanders should insist on retaining the ability to tailor value inputs. Unless input opportunities are intuitive, commanders and troops will revert to simpler, combat-tested tools with which they are more comfortable: the same old radios or weapons or, for decision purposes, slide decks. Developers can help make probability estimates more intuitive by providing them in visual form. Likewise, they can make value trade-offs more intuitive by presenting different hypothetical (but realistic) choices to assist decision-makers in refining their value judgements.

The unenviable task of commanders is to imagine a number of potential outcomes given their particular context and assign each a numerical score, or utility, such that meaningful comparisons can be made between them. For example, a commander might place a value of 1,000 points on the destruction of an enemy aircraft carrier and -500 points on the loss of a fighter jet. If this is an accurate reflection of the commander's values, she should be indifferent between an attack that destroys one enemy carrier with no fighter losses and one that destroys two carriers but costs her two fighters. Both are valued equally at 1,000 points. If the commander strongly prefers one outcome over the other, then the points should be adjusted to better reflect her actual values, or else an algorithm using that point system will make choices inconsistent with the commander's values. This is just one example of how to elicit trade-offs, but the key point is that the trade-offs need to be given in precise terms.
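The indifference claim in that example reduces to trivial arithmetic; the short sketch below simply encodes the quoted point values to make the comparison explicit.

```python
# A minimal check of the carrier-and-fighter example from the text.
# Point values come from the article; the scoring function is an illustration.
CARRIER_DESTROYED = 1000
FIGHTER_LOST = -500

def score(carriers_destroyed: int, fighters_lost: int) -> int:
    """Total utility of an outcome under the commander's stated point values."""
    return carriers_destroyed * CARRIER_DESTROYED + fighters_lost * FIGHTER_LOST

print(score(1, 0))  # 1000: one carrier destroyed, no fighters lost
print(score(2, 2))  # 1000: two carriers destroyed at the cost of two fighters
# Equal scores imply indifference; if the commander is not indifferent,
# the point values do not yet reflect her actual trade-offs.
```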

Finally, the military should pay special attention to helping decision-makers become proficient in their roles as appraisers of value, particularly with respect to decisions focused on whose life to risk, when, and for what objective. In the command-and-control paradigm of the future, decision-makers will likely be required to document such trade-offs in explicit forms so machines can understand them (e.g., "I recognize there is a 12 percent chance that you won't survive this mission, but I judge the value of the target to be worth the risk").

If decision-makers at the tactical, operational, or strategic levels are not aware of or are unwilling to pay these ethical costs, then the construct of machine-aided choice will collapse. It will collapse either because machines cannot assist human choice without explicit trade-offs, or because decision-makers and their institutions will be ethically compromised by allowing machines to obscure the trade-offs implied by their value models. Neither is an acceptable outcome. Rather, as an institution, the military should embrace the requisite transparency that comes with the responsibility to make enumerated judgements about life and death. Paradoxically, documenting risk tolerance and value assignment may serve to increase subordinate autonomy during conflict. A major advantage of formally modeling a decision-maker's value trade-offs is that it allows subordinates, and potentially even autonomous machines, to take action in the absence of the decision-maker. This machine-aided decision process enables decentralized execution at scale that reflects the leader's values better than even the most carefully crafted rules of engagement or commander's intent. As long as trade-offs can be tied back to a decision-maker, ethical responsibility lies with that decision-maker.

Keeping Values Preeminent

The Electronic Numerical Integrator and Computer, now an artifact of history, was the top secret that the New York Times revealed in 1946. Though important as a machine in its own right, the computer's true significance lay in its symbolism. It represented the capacity for technology to sprint ahead of decision-makers, and occasionally pull them where they did not want to go.

The military should race ahead with investment in machine learning, but with a keen eye on the primacy of commander values. If the U.S. military wishes to keep pace with China and Russia on this issue, it cannot afford to delay in developing machines designed to execute the complicated but unobjectionable components of decision-making: identifying alternatives, outcomes, and probabilities. Likewise, if it wishes to maintain its moral standing in this algorithmic arms race, it should ensure that value trade-offs remain the responsibility of commanders. The U.S. military's professional development education should also begin training decision-makers on how to most effectively maintain accountability for the straightforward but vexing components of value judgements in conflict.

We stand encouraged by the continued debate and hard discussions on how best to leverage the incredible advancement in AI, machine learning, computer vision, and like technologies to unleash the military's most valuable weapon system: the men and women who serve in uniform. The military should take steps now to ensure that those people and their values remain the key players in warfare.

Brad DeWees is a major in the U.S. Air Force and a tactical air control party officer. He is currently the deputy chief of staff for 9th Air Force (Air Forces Central). An alumnus of the Air Force Chief of Staff's Strategic Ph.D. program, he holds a Ph.D. in decision science from Harvard University. LinkedIn.

Chris "FIAT" Umphres is a major in the U.S. Air Force and an F-35A pilot. An alumnus of the Air Force Chief of Staff's Strategic Ph.D. program, he holds a Ph.D. in decision science from Harvard University and a master's in management science and engineering from Stanford University. LinkedIn.

Maddy Tung is a second lieutenant in the U.S. Air Force and an information operations officer. A Rhodes Scholar, she is completing dual degrees at the University of Oxford. She recently completed an M.Sc. in computer science and began the M.Sc. in social science of the internet. LinkedIn.

The views expressed here are the authors' alone and do not necessarily reflect those of the U.S. government or any part thereof.

Image: U.S. Air Force (Photo by Staff Sgt. Sean Carnes)

See the article here:
Machine Learning and Life-and-Death Decisions on the Battlefield - War on the Rocks

Machine Learning Tool Gives Early Warning of Cardiac Issues or Blood Clots in COVID Patients – HospiMedica

A team of biomedical engineers and heart specialists have developed an algorithm that warns doctors several hours before hospitalized COVID-19 patients experience cardiac arrest or blood clots.

The COVID-HEART predictor, developed by scientists at Johns Hopkins University (JHU; Baltimore, MD, USA) using data from patients treated for COVID-19, can forecast cardiac arrest in COVID-19 patients with a median early warning time of 18 hours and predict blood clots three days in advance. The machine-learning algorithm was built with more than 100 clinical data points, demographic information and laboratory results obtained from the JH-CROWN registry that Johns Hopkins established to collect COVID data from every patient in the hospital system. The scientists also added other variables reported by doctors on Twitter and from other pre-print papers.

The team did not anticipate that electrocardiogram data would play a critical role in the prediction of blood clotting. But once it was added, ECG data became one of the most accurate indicators for the condition. The next step for the researchers is to develop the best method for setting up the technology in hospitals to aid with the care of COVID-19 patients.
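The COVID-HEART model itself is not published here, but the general pattern the article describes (a classifier trained on tabular clinical, demographic, laboratory and ECG-derived features to flag an adverse event in advance) can be sketched roughly as follows; the file name, column names and the choice of a gradient-boosting classifier are all assumptions made for illustration.

```python
# Rough, hypothetical sketch of an early-warning classifier on tabular clinical
# data -- an illustration of the general approach, not the COVID-HEART model.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical registry extract: demographics, labs, and ECG-derived features,
# with a label marking whether the event occurred within the prediction window.
df = pd.read_csv("registry_snapshot.csv")            # hypothetical file
features = df.drop(columns=["event_within_window"])
labels = df["event_within_window"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# A continuously updating predictor would re-score each patient as new vitals,
# labs, and ECG data arrive; here we just score a held-out test set.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
```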

"It's an early warning system to predict in real time these two outcomes in hospitalized COVID patients," said senior author Natalia Trayanova, the Murray B. Sachs Professor of Biomedical Engineering and a professor of medicine. "The continuously updating predictor can help hospitals allocate the appropriate resources and proper interventions to attain the best outcomes for patients."

"The COVID-HEART predictor tool could help in the rapid triage of COVID-19 patients in the clinical setting, especially when resources are limited," said Allison Hays, associate professor of medicine in the Johns Hopkins University School of Medicine and the project's main clinical collaborator. "This may have implications for the treatment and closer monitoring of COVID-19 patients to help prevent these poor outcomes."

Related Links: Johns Hopkins University

Read the rest here:
Machine Learning Tool Gives Early Warning of Cardiac Issues or Blood Clots in COVID Patients - HospiMedica

Machine learning in human resources: how it works & its real-world applications – iTMunch

According to research conducted by Glassdoor, the entire interview process at companies in the United States takes about 22.9 days on average, and the same process in Germany, France and the UK takes 4-9 days longer [1]. Another study, by the Society for Human Resource Management, which examined data from more than 275,000 members in 160 countries, found that the average time taken to fill a position is 42 days [2]. Clearly, hiring is a time-consuming and tedious process. Groundbreaking technologies like cloud computing, big data, augmented reality, virtual reality, blockchain technology and the Internet of Things can play a key role in making this process move faster. Machine learning in human resources is one such technology that has made the recruitment process not just faster but more effective.

Machine learning (ML) is treated as a subset of artificial intelligence (AI). AI is a branch of computer science that deals with building smart machines capable of performing tasks that typically require human intelligence. Machine learning, by definition, is the study of algorithms that improve automatically over time with more data and experience. It is the science of getting machines (computers) to learn how to think and act like humans. To improve a machine learning algorithm, data is fed into it over time in the form of observations and real-world interactions. ML algorithms build models from sample or training data in order to make predictions and decisions without being explicitly programmed to do so.
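As a minimal, generic illustration of that idea (unrelated to any particular HR product), the Python sketch below fits a model to labeled examples instead of hand-written rules, using a toy dataset bundled with scikit-learn.

```python
# A model is not given explicit rules; it infers them from labeled examples.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                      # "learning" from observations
print("accuracy:", model.score(X_test, y_test))  # improves with more/better data
```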

Machine learning in itself is not a new technology but its integration with the HR function of organizations has been gradual and only recently started to have an impact. In this blog, we talk about how machine learning has contributed in making HR processes easier, how it works and what are its real-world applications. Let us begin by learning about this concept in brief.

The HR department's responsibilities with regard to recruitment used to be gathering and screening resumes, reaching out to candidates who fit the job description, lining up interviews and sending offer letters. They also included managing a new employee's onboarding process and handling the exit process of an employee who decides to leave. Today, the human resource department is about all of this and much more. The department is now also expected to be able to predict employee attrition and candidate success, and this is possible through AI and machine learning in HR.

The objective behind integrating machine learning in human resource processes is the identification and automation of repetitive, time-consuming tasks to free up the HR staff. By automating these processes, they can devote more time and resources to other important strategic projects and to actual human interactions with prospective employees. ML is capable of efficiently handling the following HR roles, tasks and functions:

SEE ALSO: The Role of AI and Machine Learning in Affiliate Marketing

An HR professional keeps track of who saw the job posting and the job portal on which the applicant saw it. They collect the CVs and resumes of all the applicants and come up with a way to categorize the data in those documents. Additionally, they schedule, standardize and streamline the entire interview process. Moreover, they keep track of the social media activities of applicants along with other relevant data. All of this data collected by the HR professional is fed into machine learning HR software from day one. Soon enough, the software's HR analytics begin analyzing that data to discover and display insights and patterns.

The opportunities for learning through insights provided by machine learning in HR are endless. The software helps HR professionals discover things like which interviewer is better at identifying the right candidate and which job portal or job posting attracts more, or higher-quality, applicants.
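Once the data is centralized, an insight like "which portal yields the best hires" can be a one-line aggregation. The sketch below is hypothetical; the CSV file and column names are assumptions, not any HR product's actual schema.

```python
# Hypothetical illustration of a basic HR-analytics insight from applicant data.
import pandas as pd

applicants = pd.read_csv("applicants.csv")   # hypothetical export from an ATS

# Hire rate per source portal, sorted best-first.
portal_quality = (
    applicants.groupby("source_portal")["hired"]
    .mean()
    .sort_values(ascending=False)
)
print(portal_quality)

# The same pattern answers "which interviewer identifies good candidates":
interviewer_quality = applicants.groupby("interviewer")["offer_accepted"].mean()
print(interviewer_quality)
```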

With HR analytics and machine learning, fine-tuning and personalization of training is possible, which makes the training experience more relevant to newly hired employees. It helps in identifying knowledge gaps or loopholes in training early on. It can also become a useful resource for company-related FAQs and information like company policies, code of conduct, benefits and conflict resolution.

The best way to better understand how machine learning has made HR processes more efficient is by getting acquainted with the real-world applications of this technology. Let us have a look at some applications below.

SEE ALSO: The Importance of Human Resources Analytics

Scheduling is generally a time-demanding task. It includes coordinating with candidates and scheduling interviews, enhancing the onboarding experience, calling candidates for follow-ups, performance reviews, training, testing and answering common HR queries. Automating these tedious processes is one of the first applications of machine learning in human resources. ML takes the burden of these cumbersome tasks away from the HR staff by streamlining and automating them, which frees up time to focus on bigger issues at hand. A few of the best recruitment scheduling tools are Beamery, Yello and Avature.

Once an HR professional is informed about the kind of talent that needs to be hired, one challenge is getting this information out and attracting the right set of candidates who might be fit for the role. A huge number of companies trust ML for this task. Renowned job search platforms like LinkedIn and Glassdoor use machine learning and intelligent algorithms to help HR professionals filter and find the most suitable candidates for the job.

Machine learning in human resources is also used to track new and potential applicants as they come into the system. A study conducted by Capterra looked at how the use of recruitment software or applicant tracking software helped recruiters. It found that 75% of the recruiters contacted used some form of recruitment or applicant tracking software, with 94% agreeing that it improved their hiring process. It further found that just 5% of recruiters thought that using applicant tracking software had a negative impact on their company [3].

Using such software also gives HR professionals access to predictive analytics, which helps them assess whether a person would be suitable for the job and a good fit for the company. Some of the best applicant tracking systems on the market are Pinpoint, Greenhouse and ClearCompany.

If hiring an employee is difficult, retaining an employee is even more challenging. There are factors in a company that make an employee stay or move on to their next job. A study conducted by Gallup asked employees from different organizations whether they'd leave or stay if certain perks were provided. It found that 37% would quit their present job for a new one that allowed them to work remotely part-time, 54% would switch for monetary bonuses, 51% for flexible working hours and 51% for employers offering retirement plans with pensions [4]. Though employee retention depends on various factors, it is imperative for an HR professional to understand, manage and predict employee attrition.

Machine learning HR tools provide valuable data and insights into the above-mentioned factors and help HR professionals make decisions regarding employing someone (or not) more efficiently. By understanding this data about employee turnover, they are in a better position to take corrective measures well in advance to eliminate or minimize the issues.
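As a rough illustration of what such an attrition model might look like (a generic sketch under assumed column names and data, not any vendor's actual method):

```python
# Hypothetical attrition-prediction sketch on an assumed HR data export.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

employees = pd.read_csv("hr_records.csv")    # hypothetical HR export
X = employees[["tenure_months", "salary_percentile", "remote_days_per_week",
               "overtime_hours", "engagement_score"]]
y = employees["left_within_12_months"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Feature importances hint at which retention levers (flexibility, pay, hours)
# are driving turnover in this particular workforce.
model.fit(X, y)
print(dict(zip(X.columns, model.feature_importances_.round(2))))
```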

An engaged employee is one who is involved in, committed to and enthusiastic about their work and workplace. The State of the Global Workplace report by Gallup found that 85% of employees are disengaged. Translation: the majority of the workforce views their workplace negatively or only does the bare minimum to get through the day, with little to no attachment to their work or workplace. The study further addresses why employee engagement is necessary. It found that offices with more engaged employees see 10% higher customer metrics, 17% higher productivity, 20% more sales and 21% more profitability. Moreover, it found that highly engaged workplaces saw 41% less absenteeism [5].

Machine learning HR software helps the human resource department make employees more engaged. The insights provided by machine learning-driven HR analytics help the HR team significantly in increasing employee productivity and reducing employee turnover rates. Software from Workometry and Glint aids immeasurably in measuring, analyzing and reporting on employee engagement and employees' general feelings towards their work.

The applications of machine learning in human resources described above are already in use by HR professionals across the globe. Though the human element of human resources won't completely disappear, machine learning can guide and assist HR professionals substantially in ensuring that the various functions of this department are well aligned and that the strategic decisions made on a day-to-day basis are more accurate.

These are definitely exciting times for the HR industry and it is crucial that those working in this department are aware of the existing cutting-edge solutions available and the new trends that continue to develop.

The automation of HR functions like hiring and recruitment, training, development and retention has already had a profound positive effect on companies. Companies that refuse to, or are slow to, adapt and adopt machine learning and other new technologies will find themselves at a competitive disadvantage, while those that embrace them will flourish.

SEE ALSO:Future of Human Resource Management: HR Tech Trends of 2019

For more updates and the latest tech news, keep reading iTMunch.

Sources

[1] Glassdoor (2015) Why is Hiring Taking Longer, New Insights from Glassdoor Data [Online] Available from: https://www.glassdoor.com/research/app/uploads/sites/2/2015/06/GD_Report_3-2.pdf [Accessed December 2020]

[2] Society for Human Resource Management (2016) 2016 Human Capital Benchmarking Report [Online] Available from: https://www.ebiinc.com/wp-content/uploads/attachments/2016-Human-Capital-Report.pdf [Accessed December 2020]

[3] Capterra (2015) Recruiting Software Impact Report [Online] Available from: https://www.capterra.com/recruiting-software/impact-of-recruiting-software-on-businesses [Accessed December 2020]

[4] Gallup (2017) State of the American Workplace Report [Online] Available from: https://www.gallup.com/workplace/238085/state-american-workplace-report-2017.aspx [Accessed December 2020]

[5] Gallup (2017) State of the Global Workplace [Online] Available from: https://www.gallup.com/workplace/238079/state-global-workplace-2017.aspx#formheader [Accessed December 2020]

Image Courtesy

Image 1: Background vector created by starline http://www.freepik.com

Image 2: Business photo created by yanalya http://www.freepik.com

Here is the original post:
Machine learning in human resources: how it works & its real-world applications - iTMunch

Top 10 AI and machine learning stories of 2020 – Healthcare IT News

Toward the tail end of pre-pandemic 2019, Mayo Clinic Chief Information Officer Cris Ross stood on a stage in California and declared, "This artificial intelligence stuff is real."

Indeed, while some may argue that AI and machine learning might have been harnessed better during the early days of COVID-19, and while the risk of algorithmic bias is very real, there's little question that artificial intelligence is evolving and maturing by the day for an array of use cases across healthcare.

Here are the most-read stories about AI during this most unusual year.

UK to use AI for COVID-19 vaccine side effects. On a day when vaccines, developed in record time, first begin to be administered in the U.S., it's worth remembering AI's crucial role in helping the world get to this hopefully pivotal moment.

AI algorithm IDs abnormal chest X-rays from COVID-19 patients. Machine learning has been a hugely valuable diagnostic tool as well, as illustrated by this story about a tool from cognitive computing vendor behold.ai that promises "instant triage" based on lung scans, offering faster diagnosis of COVID-19 patients and helping with resource allocation.

How AI use cases are evolving in the time of COVID-19. In a HIMSS20 Digital presentation, leaders from Google Cloud, Nuance and Health Data Analytics Institute shared perspective on how AI and automation were being deployed for pandemic response, from the hunt for therapeutics and vaccines to analytics to optimize revenue cycle strategies.

Microsoft launches major $40M AI for Health initiative. The company said the five-year AI for Health program (part of its $165 million AI for Good initiative) will help healthcare organizations around the world deploy leading-edge technologies in the service of three key areas: accelerating medical research, improving worldwide understanding to protect against global health crises such as COVID-19, and reducing health inequity.

How AI and machine learning are transforming clinical decision support. "Today's digital tools only scratch the surface," said Mayo Clinic Platform President Dr. John Halamka. "Incorporating newly developed algorithms that take advantage of machine learning, neural networks, and a variety of other types of artificial intelligence can help address many of the shortcomings of human intelligence."

Clinical AI vendor Jvion unveils COVID Community Vulnerability Map. In the very early days of the pandemic, clinical AI company Jvion launched this interactive map, which tracks the social determinants of health, helping identify populations down to the census-block level that are at risk for severe outcomes.

AI bias may worsen COVID-19 health disparities for people of color. An article in the Journal of the American Medical Informatics Association asserts that biased data models could further the disproportionate impact the COVID-19 pandemic is already having on people of color. "If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden," said researchers.

The origins of AI in healthcare, and where it can help the industry now. "The intersection of medicine and AI is really not a new concept," said Dr. Taha Kass-Hout, director of machine learning and chief medical officer at Amazon Web Services. (There were limited chatbots and other clinical applications as far back as the mid-60s.) But over the past few years, it has become ubiquitous across the healthcare ecosystem. "Today, if you're looking at PubMed, it cites over 12,000 publications with deep learning, over 50,000 machine learning," he said.

AI, telehealth could help address hospital workforce challenges. "Labor is the largest single cost for most hospitals, and the workforce is essential to the critical mission of providing life-saving care," noted a January American Hospital Association report on the administrative, financial, operational and clinical uses of artificial intelligence. "Although there are challenges, there also are opportunities to improve care, motivate and re-skill staff, and modernize processes and business models that reflect the shift toward providing the right care, at the right time, in the right setting."

AI is helping reinvent CDS, unlock COVID-19 insights at Mayo Clinic. In a HIMSS20 presentation, John Halamka shared some of the most promising recent clinical decision support advances at the Minnesota health system and described how they're informing treatment decisions for an array of different specialties and helping shape its understanding of COVID-19. "Imagine the power [of] an AI algorithm if you could make available every pathology slide that has ever been created in the history of the Mayo Clinic," he said. "That's something we're certainly working on."

Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.

Read the original post:
Top 10 AI and machine learning stories of 2020 - Healthcare IT News