Archive for the ‘Artificial Intelligence’ Category

Digital nursing 1: exploring the benefits and risks of artificial intelligence – Nursing Times

Artificial intelligence is already used in healthcare; this first article in a three-part series on digital healthcare looks at the benefits and risks

Artificial intelligence is already being used to support advanced clinical decisions, improve the accuracy and safety of care, and plan and manage NHS resources. It can make machines do things that used to require human intelligence, and can draw on huge amounts of data to make calculations that are beyond any human being. Artificial intelligence-enabled robots are being developed to take on some nursing functions. Nurses need to examine how their own roles may be changed and advocate for patient involvement in light of emerging technologies. They will also need training and support to feel confident using artificial intelligence tools. This first article in a series on digital healthcare examines the benefits and risks of artificial intelligence.

Citation: Agnew T (2022) Digital nursing 1: exploring the benefits and risks of artificial intelligence. Nursing Times [online]; 118: 8.

Author: Thelma Agnew is a freelance health journalist.

The ministerial foreword to Joshi and Morley's (2019) report, published by NHSX, provided one key reason to be excited about artificial intelligence (AI) in healthcare: "put simply, this technology can make the NHS even better at what it does: treating and caring for people". AI is exciting, but what is it, exactly? This first in a series of articles about digital healthcare will discuss the benefits and risks of AI.

Increasingly, nurses are encouraged to lead and shape emerging digital technologies, to ensure the changes made are fit for purpose and to guard against unintended consequences, such as increased workloads, dehumanised care and the exclusion of already marginalised groups of people. The areas that could be improved, or even transformed, by AI, according to Joshi and Morley's (2019) report, include:

Despite this, it is difficult to approach, let alone lead on, technological advancements if nurses do not understand them, and AI may feel too big to grasp at times.

There are many definitions in the ever-growing literature on AI. Joshi and Morley's (2019) report suggested that one of the most useful definitions in the field of healthcare is also the oldest; they explained that it dates from a research project in 1955 and stated that AI is "the science of making machines do things that would require intelligence if done by people".

A publication by The King's Fund (Mistry, 2020) gave a more detailed, but still straightforward, explanation, stating that AI is "an umbrella term encompassing a number of different approaches where software replicates functions that have, until recently, been synonymous with human intelligence". This includes a wide spectrum of abilities, such as "visually identifying and classifying objects [and] converting speech to text and text to speech".

The origins of AI go back decades, so why are we hearing so much about it now? One reason is that recent developments in applied mathematics and computer science have made computers much better at reading patterns in large amounts of complex data, releasing AI's potential (Mistry, 2020). The possibilities of AI are being further expanded by machine learning, which Mistry (2020) defined as "a type of artificial intelligence that enables computers to learn without being explicitly programmed, meaning they can teach themselves to change when exposed to new data".
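Mistry's definition of machine learning, "computers learn without being explicitly programmed", can be illustrated with a minimal sketch: instead of hand-coding a rule, a program infers one from observed examples. The data, threshold and function names below are invented for illustration, not drawn from any NHS system.

```python
# Minimal illustration of "learning without being explicitly programmed":
# fit a line to observed (x, y) pairs and use it to predict unseen values,
# rather than hard-coding the underlying rule.

def fit_line(points):
    """Ordinary least squares for y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "Training data": the program is never told the rule that generated it
data = [(1, 3.1), (2, 4.9), (3, 7.2), (4, 8.8)]
a, b = fit_line(data)
print(round(a * 5 + b, 2))  # predicted y for an unseen x = 5
```

Real healthcare models are vastly larger, but the principle is the same: the behaviour comes from the data the system was trained on, which is why the quality and representativeness of that data matter so much.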

Most health staff still lack direct experience with AI technologies, as is highlighted in Nix et al's (2022) report, developed by NHS AI Lab and Health Education England (Box 1). However, the increasing use of AI technologies in nursing, such as providing information for advanced clinical decision support, is thought to be inevitable (Booth et al, 2021; Robert, 2019).

Box 1. AI technologies: a strange science is about to become more familiar

A survey of >1,000 NHS staff in the UK by The Health Foundation (Hardie et al, 2021) found that three-quarters of respondents had heard, seen or read "not very much" or "nothing at all" about AI. The survey also found:

The Health Foundation survey identified fears among health workers that AI technologies present a threat to their jobs. This has been echoed in several other studies, along with concerns about data governance, cyber security, patient safety and fairness (Nix et al, 2022).

The reservations about AI are unlikely to put the brakes on its adoption in healthcare. Nix et al's (2022) report points to evidence that use of AI is accelerating, with an increasing number of AI technologies expected to be used in healthcare in the next three years. It highlights the AI roadmap report by Health Education England and Unity Insights (2021), which surveyed more than 200 AI technologies: 20% were estimated to be ready for large-scale deployment in 2022, with an additional 40% ready in the next three years.

AI = artificial intelligence

AI-enabled decision support systems potentially provide numerous benefits; for example, they have already dramatically improved the detection of sepsis (Horng et al, 2017). However, there are also risks, as AI is only as good as its data. Nix et al's (2022) report warns that confidence in AI is not always desirable when using it for clinical decision making, and nurses need to recognise when to balance it with other sources of clinical information (Box 2).

Box 2. Confidence in using AI for clinical decision making

A recent report from NHS AI Lab and Health Education England recommends:

The main recommendation from the report is to develop educational pathways and materials for all health professionals to equip them to confidently evaluate, adopt and use AI (Nix et al, 2022).

AI = artificial intelligence

AI systems that evolve themselves may reflect or reinforce societal biases (for example, racial biases) and other inequities present in the data (Obermeyer et al, 2019; Gianfrancesco et al, 2018). It is important for nurses to be involved in innovations such as AI to make sure they are developing systems in line with ethical frameworks and to advocate for patient involvement (Booth et al, 2021). There is also a risk that AI systems that perform extremely well in controlled conditions will be less impressive in the real world, and there are unanswered questions about their safety and cost effectiveness in healthcare settings (Maguire et al, 2021).

The NHS is already using AI and machine learning, at a population level, to help identify older people in local areas who are at risk of frailty and adverse health outcomes; one example of this is the electronic Frailty Index, which draws on data that is routinely recorded by GP practices (NHS England, 2017).
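The electronic Frailty Index is a cumulative-deficit model simple enough to sketch: the score is the number of recorded deficits divided by the 36 deficits the index covers, banded into severity categories. The bands below follow the commonly cited eFI cut-offs, but the function names and example patient are illustrative, not taken from any GP system.

```python
# Sketch of a cumulative-deficit frailty score in the style of the
# electronic Frailty Index (eFI), which counts how many of 36 possible
# deficits appear in a patient's GP record. Example patient is invented.

TOTAL_DEFICITS = 36  # the published eFI uses 36 coded deficits

def frailty_score(deficits_present, total=TOTAL_DEFICITS):
    """Fraction of possible deficits recorded for the patient."""
    return deficits_present / total

def frailty_category(score):
    # Bands follow the commonly cited eFI cut-offs
    if score <= 0.12:
        return "fit"
    if score <= 0.24:
        return "mild frailty"
    if score <= 0.36:
        return "moderate frailty"
    return "severe frailty"

score = frailty_score(8)  # invented patient with 8 recorded deficits
print(round(score, 2), frailty_category(score))  # 0.22 mild frailty
```

The appeal of this design is transparency: a nurse or GP can see exactly which recorded deficits drove a patient's score, unlike with opaque machine-learned risk models.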

Predictive analytics in electronic patient records should also, increasingly, help doctors and nurses to diagnose and treat the individual patient in front of them (HEE, 2019). AI has also played a key role in informing the government's response to the coronavirus pandemic: the launch of the NHS Covid-19 Data Store by NHSX (Gould et al, 2020) has aided the analysis of vast amounts of data to:

AI is also central to the government's new digital health and social care plan (Department of Health and Social Care and NHS England, 2022). This includes using AI to develop "new diagnostics capacity to enable image-sharing and clinical decision support... These technologies support testing close to home, streamlining of pathways, triaging of waiting lists, faster diagnosis and levelling up under-served areas".

With the development of smaller and more-sophisticated electronic components, robots embedded with AI will likely become more widely used in healthcare (Mistry, 2020; HEE, 2019). The highly influential Topol review predicted that robots would become "the hardware for AI", performing manual and cognitive tasks, and freeing up healthcare staff to spend more time doing things that are uniquely human, such as interacting with patients (HEE, 2019).

This picture is complicated by the fact that robots have also been developed to provide social and emotional support to people, arguably blurring the line between machines and humans. Examples currently in use include:

A common theme in the literature on AI and robotics in healthcare is the expectation that patients, as well as staff, will receive multiple benefits from the introduction of intelligent machines, with improvements in early diagnosis, and the accuracy and safety of care (Mistry, 2020; HEE, 2019). Unlike human healthcare workers, robots never get bored or tired, are unaffected by hazards in clinical settings, such as X-ray radiation, and can endlessly repeat tasks that require precision without a drop in performance (Mistry, 2020). The potential is there for robotics to help with everything from moving patients to surgical procedures that are beyond the capabilities of surgeons (Mistry, 2020).

With the input of nurses, robots are also being developed to take on some nursing functions, including:

Developments do not mean that nurses are about to be replaced by intelligent machines, but they do suggest that nursing, as it is currently understood, will change. A 2019 study by former American Nurses Association executive vice president Nancy Robert suggested that the arrival of telehealth and smart robots in people's homes will see nursing evolve into more of a coaching role, guiding patients to improve their health and providing continuity of care, but still being physically present at the bedside when it really matters (Robert, 2019). A nursing dean quoted in the study said they could not imagine ever choosing a robot over a human to care for them if they were dying, and stated that "nuances in human behaviour will keep nurses on the front line of care" (Robert, 2019).

It is hoped that AI and robotics will work together as assistants to nurses by, on the one hand, supporting advanced clinical decisions and, on the other, automating basic tasks that are time consuming but could be performed by someone or something else. In this vision of an AI-enabled future, machines free up nurses professionally to use their education, skills and experience (Robert, 2019). A barrier to nurses using the technology to fulfil their potential could be other healthcare disciplines resistance to nurses practising at the top of their licence (Robert, 2019).

There is also the risk that, as AI tools become more widely used in healthcare, they will influence how nurses practise, without nurses having the opportunity to influence them. A 2021 study on how the nursing profession should adapt for a digital future called for an immediate inquiry into the influence of AI on nursing practice for the next 10 years and beyond (Booth et al, 2021). The authors pointed out that the increased use of AI is bringing with it new policy, regulatory, legal and ethical issues; they called on the nursing profession to:

Robert (2019) suggests that nurses have a responsibility to ask about the data used to train AI systems they use, and ensure they have been checked for bias.

As outlined in Box 3, nurses will need training and support to feel confident in, and overcome the barriers to, using AI tools, which will only work properly if the ageing technology infrastructure of the NHS improves (Joshi and Morley, 2019). The excitement about AI is justified, but it is important not to get dazzled by the hype (Joshi and Morley, 2019).

Box 3. AI and robotics in healthcare: barriers and learning

Health Education England's (2019) Topol review identified significant barriers to the deployment of AI and robotics in the NHS. These included:

Along with a code of conduct and guidance on the effectiveness of the technologies, the review called for workforce learning in three key areas:

AI = artificial intelligence.

Booth RG et al (2021) How the nursing profession should adapt for a digital future. BMJ; 373: n1190.

Department of Health and Social Care, NHS England (2022) A plan for digital health and social care. gov.uk, 29 June (accessed 29 June 2022).

Gianfrancesco MA et al (2018) Potential biases in machine learning algorithms using electronic health record data. JAMA Internal Medicine; 178: 11, 1544-1547.

Gould M et al (2020) The power of data in a pandemic. digileaders.com, 15 April (accessed 21 June 2022).

Hardie T et al (2021) Switched On: How Do We Get the Best out of Automation and AI in Health Care? The Health Foundation.

Health Education England and Unity Insights (2021) AI Roadmap: Methodology and Findings Report. HEE.

Health Education England (2019) The Topol Review: Preparing the Healthcare Workforce to Deliver the Digital Future. HEE.

Horng S et al (2017) Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning. PLoS ONE; 12: 4, e0174708.

Joshi I, Morley J (2019) Artificial Intelligence: How to Get it Right. Putting Policy into Practice for Safe Data-driven Innovation in Health and Care. NHSX.

Maguire D et al (2021) Shaping the Future of Digital Technology in Health and Social Care. The King's Fund.

Mistry P (2020) The digital revolution: eight technologies that will change health and care. kingsfund.org.uk, 13 November (accessed 21 June 2022).

NHS England (2017) Supporting Routine Frailty Identification and Frailty Care Through the GP Contract 2017/2018. NHS England.

Nix M et al (2022) Understanding Healthcare Workers' Confidence in AI. NHS AI Lab and Health Education England.

Obermeyer Z et al (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science; 366: 6464, 447-453.

Robert N (2019) How artificial intelligence is changing nursing. Nursing Management; 50: 9, 30-39.



How to Make Teachers Informed Consumers of Artificial Intelligence – Market Brief – EdWeek

New Orleans: Artificial intelligence's place in schools may be poised to grow, but school districts and companies have a long way to go before teachers buy into the concept.

At a session on the future of AI in school districts, held at the ISTE conference this week, a panel of leaders discussed its potential to shape classroom experiences and the many unresolved questions associated with the technology.

The mention of AI can intimidate teachers, as it's so often associated with complex code and sophisticated robotics. But AI is already a part of daily life, in the way our phones recommend content to us or the ways that our smart home technology responds to our requests.

When AI is made relatable, that's when teachers buy into it, opening doors for successful implementation in the classroom, panelists said.

"AI sounds so exotic right now, but it wasn't that long ago that even computer science in classrooms was blowing our minds," said Joseph South, chief learning officer for ISTE. South is a former director of the office of educational technology at the U.S. Department of Education.

The first step in getting educators comfortable with AI is to provide them the support to understand it, said Nancye Blair Black, ISTE's AI Explorations project lead, who moderated the panel. That kind of support needs to come from many sources, from federal officials down to the state level and individual districts.

"We need to be talking about, 'What is AI?' and it needs to be explained," she said. "A lot of people think AI is magic, but we just need to understand these tools and their limitations and do more research to get people on board."

With the use of machine learning, AI technologies can adapt to individual students' needs in real time, tracking their progress and providing immediate feedback and data to teachers as well.

In instances where a student may be rushing through answering questions, AI technology can pick up on that and flag the student to slow down, the speakers said. This can provide a level of individual attention that can't be achieved by a teacher who's expected to be looking over every student's shoulder simultaneously.
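The panelists did not describe how such a check works internally, but a plausible minimal version compares a student's recent response times against their own earlier baseline. Everything below (function name, window, threshold, timings) is an invented illustration of the idea, not an actual product's logic.

```python
# Illustrative sketch of a "rushing" flag: compare the average time of a
# student's last few answers against that student's own earlier average.
# The window, ratio and sample timings are invented for illustration.

def flag_rushing(response_times, window=3, ratio=0.4):
    """Flag if the last `window` answers averaged under `ratio` of the
    student's baseline (the average over all earlier answers)."""
    if len(response_times) <= window:
        return False  # not enough history to compare
    baseline = sum(response_times[:-window]) / (len(response_times) - window)
    recent = sum(response_times[-window:]) / window
    return recent < ratio * baseline

times = [42, 38, 45, 40, 9, 7, 8]  # seconds per question
print(flag_rushing(times))  # True: last three answers far below baseline
```

Comparing a student against their own history, rather than a class-wide cutoff, is one way such a rule avoids penalising students who are simply fast readers.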

Others see reasons to be wary of AIs potential impact on teaching and learning. Many ed-tech advocates and academic researchers have raised serious concerns that the technology could have a negative impact on students.

One longstanding worry is that the data AI systems rely on can be inaccurate or even discriminatory, and that the algorithms put into AI programs make faulty assumptions about students and their educational interests and potential.

For instance, if AI is used to influence decisions about which lessons or academic programs students have access to, it could end up scuttling students opportunities, rather than enhancing them.

Nneka McGee, executive director for learning and innovation for the South San Antonio ISD, mentioned in the ISTE panel that a lot more research still has to be done on AI, regarding opportunity, data, and ethics.

"Some districts that are more affluent will have more funding, so how do we provide opportunities for all students?" she said.

"We also need to look into the amount of data that is needed and collected for AI to run effectively. Your school will probably need a data-sharing agreement with the companies you work with."

A lot of research needs to be done on AI's data security and accessibility, as well as how best to integrate such technologies across the curriculum, not just in STEM-focused courses.

It's important to start getting educators familiar with AI and how it works, panelists said, because, when used effectively, AI can increase student engagement in the classroom and give teachers more time to customize lessons to individual student needs.

As AI picks up momentum within the education sphere, the speakers said that teachers need to start by learning the fundamentals of the technology and how it can be used in their classrooms. But a big share of the responsibility also falls on company officials developing new AI products, Black said.

When asked about advice for ed-tech organizations that are looking to expand into AI capabilities, Black emphasized the need for user-friendliness and an interface that can be seamlessly assimilated into existing curriculum and standards.

"Hand [teachers] something they can use right away, not just another thing to pile on what they already have," she said.

McGee, of the South San Antonio ISD, urges companies to include teachers in every part of the process when it comes to pioneering AI.

"Involve teachers because they're on the front lines; they're the first ones who see our students," she said. "It doesn't matter how much we do out here. If the teacher doesn't believe in what you're bringing to the table, it will not be successful."





Editing Videos On the Cloud Using Artificial Intelligence – Entrepreneur



VideoVerse was founded to create exciting content with artificial intelligence technology and to make video editing democratic and accessible. The company's journey began in 2016, when Saket Dandotia, Alok Patil and Vinayak Shrivastav teamed up.

The trio wanted to create a technology that would disrupt the content industry. The solution they came up with was Magnifi, which, along with Styck and Illusto, makes up the ecosystem of VideoVerse.

"We truly believe that technology should help all content creators maximize their investments by not only telling better stories but also garnering a wider reach, seamless transition and efficient working solutions. We are constantly innovating to best suit consumer needs and industry demands," said Meghna Krishna, CRO, VideoVerse.

The company conducted market surveys and focused research on narrowing down the exact challenges it was solving for. The vision was to build a platform that allowed for accommodations and fine-tuning needed to suit every aspect of the production process as well as client requirements. The company created its platform by harnessing the power of AI and ML. It worked towards ensuring the application was precise and efficient. Sports was the first genre VideoVerse forayed into and the team researched over 30 key sports and parameters that could be meta-tagged to generate bite-sized videos.

"The urgent need for a technology solution to support the post-production processes and the demand for a solution that addressed every specific pain point in scaling content production became clear to us," added Krishna.

Krishna believes that startups are the way forward for groundbreaking ideas and technologies to find a place in the enterprise world. There is tremendous scope for innovation and every new solution or idea only helps strengthen the community.

According to Forbes India, the video creation and consumption space is growing at 24 per cent per annum, and approximately 60 per cent of internet users in India consume videos online.

Artificial Intelligence was a very new technology during VideoVerse's initial days which made it tougher to convince clients and investors. However, the company has raised $46.8 million in its recent Series B funding.

"There was a lot of ambiguity around the impact of AI and often the change from traditional methods to new age technology faces natural resistance. The challenge on hand was augmenting the existing awareness and educating end-users while ensuring that we had a seamless solution that did not disrupt the workflow," commented Krishna.

VideoVerse and its distinct cloud-agnostic products use artificial intelligence (AI) and machine learning (ML) technology to revolutionize how content is refined and consumed. As far as specific stacks go:

For Magnifi, the key technologies used are face and image recognition, vision models, optical character recognition, audio detection and NLP. Styck and Illusto are both full-stack applications built on MERN (MongoDB, Express, React, Node).

"Easy access to video editing platforms that offer state-of-the-art, next-generation solutions is the need of the hour. Being cloud-agnostic and powered by AI and ML all our platforms have a great user interface that allows anyone to master the art of video creation. There is a growing need for social-optimized content and our products are geared towards providing that with one-click solutions," added Krishna.

The company's focus is to strengthen the team, further enhance the product features and offer a complete holistic solution to its clients for all their video editing needs. VideoVerse has offices in the U.S., Europe, Israel, and India and is expanding to new markets like Singapore and the Middle East.


The Fight Over Which Uses of Artificial Intelligence Europe Should Outlaw – WIRED

In 2019, guards on the borders of Greece, Hungary, and Latvia began testing an artificial-intelligence-powered lie detector. The system, called iBorderCtrl, analyzed facial movements to attempt to spot signs a person was lying to a border agent. The trial was propelled by nearly $5 million in European Union research funding, and almost 20 years of research at Manchester Metropolitan University, in the UK.

The trial sparked controversy. Polygraphs and other technologies built to detect lies from physical attributes have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl, too. Media reports indicated that its lie-prediction algorithm didn't work, and the project's own website acknowledged that the technology "may imply risks for fundamental human rights".

This month, Silent Talker, a company spun out of Manchester Met that made the technology underlying iBorderCtrl, dissolved. But that's not the end of the story. Lawyers, activists, and lawmakers are pushing for a European Union law to regulate AI, which would ban systems that claim to detect human deception in migration, citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.

A ban on AI lie detectors at borders is one of thousands of amendments to the AI Act being considered by officials from EU nations and members of the European Parliament. The legislation is intended to protect EU citizens' fundamental rights, like the right to live free from discrimination or to declare asylum. It labels some use cases of AI high-risk, some low-risk, and slaps an outright ban on others. Those lobbying to change the AI Act include human rights groups, trade unions, and companies like Google and Microsoft, which want the AI Act to draw a distinction between those who make general-purpose AI systems and those who deploy them for specific uses.

Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the act to ban the use of AI polygraphs that measure things like eye movement, tone of voice, or facial expression at borders. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act as written would allow use of systems like iBorderCtrl, adding to Europe's existing "publicly funded border AI ecosystem". The analysis calculated that over the past two decades, roughly half of the €341 million ($356 million) in funding for the use of AI at the border, such as profiling migrants, went to private companies.

The use of AI lie detectors on borders "effectively creates new immigration policy through technology," says Petra Molnar, associate director of the nonprofit Refugee Law Lab, "labeling everyone as suspicious." "You have to prove that you are a refugee, and you're assumed to be a liar unless proven otherwise," she says. "That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders."

Molnar, an immigration lawyer, says people often avoid eye contact with border or migration officials for innocuous reasons, such as culture, religion, or trauma, but doing so is sometimes misread as a signal that a person is hiding something. Humans often struggle with cross-cultural communication or speaking to people who have experienced trauma, she says, so why would people believe a machine can do better?


Skills or jobs that won’t be replaced by Automation, Artificial Intelligence in the future – Economic Times

In the high-tech fast-changing world, the nature of work also keeps changing. In the last few decades computers, robots and automation have changed the nature and roles of almost every job. Automation and artificial intelligence are spurring a new revolution, transforming jobs in every industry from IT to manufacturing.

According to some studies, about one-fourth of jobs worldwide are at risk of being automated. This trend sometimes makes people nervous about job security.

"Increased adoption and evolution of automation and artificial intelligence brings along skepticism about the displacement of a large number of roles and skills. Instead, automation and AI should be used to evolve job roles and help make human workers more effective," said Arjun Jolly, Principal, Athena Executive Search & Consulting.

Here are some skills and professions that can't easily be replaced by automation.

Jobs involving high levels of human interaction, strategic interpretation, critical decision making, niche skills or subject-matter expertise won't be replaced by automation anytime soon; for instance, lawyers, leadership roles, medical professionals, healthcare practitioners, and IT and HR professionals. "We can automate almost every part of the contract workflow process, but will still continue to rely on human intervention to put arguments, establish social relations in the negotiation phase, and find nuances in the data, rather than relying on data and algorithms outright," Arjun Jolly said.

Human resources, customer relationship management: While Alexa or Siri are great at following your every direction, they can't really understand how you're feeling. Even the most advanced technology will never be able to comprehend our emotions and respond in the way that a human can. Whether it's a team leader helping employees through a difficult time, account managers working with clients, or hiring managers looking for the perfect candidate, you need empathy to get those jobs done.

Roles that involve building relationships with clients, customers or patients can never be replaced by automation.

"Automation will continue to take on more operational functions like payroll, filtering of job applications etc. But the human touch will always remain when it comes to HR. Similarly, even in the healthcare sector, automation and technology are playing an important role. But these need to work alongside humans: doctors, surgeons, nurses and healthcare workers, for diagnosis and treatment," Rupali Kaul, Operational Head-West, Marching Sheep, said.


Strategic, critical thinking: Automation can remove or simplify the process of implementing tasks, but it can't provide an overarching strategy that makes each task relevant. Even as the world moves towards digitization and automation, the ability to understand the context and complexities before offering solutions remains irreplaceable.

Automation can help implement tasks, but it's a long way from providing a strategy that makes each task relevant and fits it into the bigger picture. Regardless of industry, roles that require strategic thinking will always be done by humans.

"So, jobs like solutions architect, designers, professionals providing hospitality services, and consultants having the ability to integrate systems and processes would remain much in demand, IMHO. In essence, skills with the ability to provide superlative customer experiences would be the skills of the future," Ruchika Godha, COO, Advaiya, said.

Creativity: Even the most intelligent computers or robots can't paint like Picasso or create music like Mozart. Nobody can explain why some humans are more creative than others, so it's safe to say it is near impossible for computers to replicate the spark of creativity that has led to the world's most amazing feats.

"Automation is programmed and cannot replicate creativity, which is spontaneous and requires imagination, dreaming and collective inspiration, something humans are best at," Rupali Kaul, Operational Head-West, Marching Sheep, said.

Nilesh Jahagirdar, VP Marketing, [x]cube LABS, said: "While digital technologies such as AI/ML are making quite a few routine jobs redundant, there are some which can't quite be replaced, owing to the complexities involved and the fact that AI evolution is not just as magical as people think it is. At its current state, it's only repetitive tasks that follow the same rules over and over which can be done by AI. Psychologists, caregivers, most engineers, human resource managers, marketing strategists, and lawyers are some roles that cannot be replaced by AI anytime in the near future."
