Archive for the ‘Artificial Intelligence’ Category

Automation and AI sound similar, but may have vastly different impacts on the future of work – Brookings Institution

Last November, Brookings published a report on artificial intelligence's impact on the workplace that immediately raised eyebrows. Many readers, journalists, and even experts were perplexed by the report's primary finding: that, for the most part, it is better-paid, better-educated white-collar workers who are most exposed to AI's potential economic disruption.

This conclusion, by authors Mark Muro, Robert Maxim, and Jacob Whiton, seemed to fly in the face of the popular understanding of technology's future effects on workers. For years, we've been hearing about how these advancements will force mainly blue-collar, lower-income workers out of jobs, as robotics and technology slowly consume those industries.

In an article about the November report, The Mercury News outlined this discrepancy: "The study released Wednesday by the Brookings Institution seems to contradict findings from previous studies, including Brookings' own, that showed lower-skilled workers will be most affected by robots and automation, which can involve AI."

One of the previous studies that article refers to is likely Brookings's January 2019 report (also written by Muro, Maxim, and Whiton) titled "Automation and Artificial Intelligence: How machines are affecting people and places." And indeed, in apparent contradiction of the AI report, the earlier study states, "The impacts of automation in the coming decades will be variable across occupations, and will be visible especially among lower-wage, lower-education roles in occupations characterized by rote work."

So how do we square these two seemingly disparate conclusions? The key is in distinguishing artificial intelligence and automation, two similar-sounding concepts that nonetheless will have very different impacts on the future of work here in the U.S. and across the globe. Highlighting these distinctions is critical to understanding what types of workers are most vulnerable, and what we can do to help them.

The difference in how we define automation versus AI is important in how we judge their potential effects on the workplace.

Automation is a broad category describing an entire class of technologies rather than just one, hence much of the confusion surrounding its relationship to AI. Artificial intelligence can be a form of automation, as can robotics and software, three fields that the automation report focused on. Examples of the latter two forms could be machines that scurry across factory floors delivering parts and packages, or programs that automate administrative duties like accounting or payroll.

Automation substitutes for human labor in tasks both physical and cognitive, especially those that are predictable and routine. Think machine operators, food preparers, clerks, or delivery drivers. "Activities that seem relatively secure, by contrast, include: the management and development of people; applying expertise to decisionmaking, planning and creative tasks; interfacing with people; and the performance of physical activities and operating machinery in unpredictable physical environments," the automation report specified.

In the more recent AI-specific report, the authors focused on the subset of AI known as machine learning, or using algorithms to find patterns in large quantities of data. Here, the technology's relevance to the workplace is less about tasks and more about intelligence. Instead of the routine, AI theoretically substitutes for more interpersonal duties such as human planning, problem-solving, or perception.
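
To make that definition concrete, here is a minimal, hypothetical sketch of "finding patterns in large quantities of data": a model is fit on historical examples and then asked to predict outcomes for cases it has not seen. The use of scikit-learn and a synthetic dataset is an assumption for illustration; the Brookings report does not prescribe any particular toolkit.

    # Minimal illustration of machine learning as pattern-finding: fit a model
    # on labeled historical records, then predict outcomes for unseen cases.
    # scikit-learn and the synthetic data are assumptions for illustration.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for "large quantities of data"
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)  # learn patterns from historical examples
    print("held-out accuracy:", model.score(X_test, y_test))  # apply them to new cases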

And what are some of the topline occupations exposed to AI's effects, according to Brookings' research? Market research analysts and marketing specialists (planning and creative tasks, interfacing with people), sales managers (the management and development of people), and personal financial advisors (applying expertise to decisionmaking). The parallels between what automation likely won't affect and what AI likely will affect line up almost perfectly.

Machine learning is especially useful for prediction-based roles. "Prediction under conditions of uncertainty is a widespread and challenging aspect of many information-sector jobs in health, business, management, marketing, and education," wrote Muro, Maxim, and Whiton in a recent follow-up to their AI report. These predictive, mostly white-collar occupations seem especially poised for disruption by AI.

Some news outlets grasped this difference between the AI and the automation report. In The New York Times's Bits newsletter, Jamie Condliffe wrote: "Previously, similar studies lumped together robotics and A.I. But when they are picked apart, it makes sense that A.I., which is about planning, perceiving and so on, would hit white-collar roles."

A very clear way to distinguish the impacts of the two concepts is to observe where Brookings Metro research anticipates those impacts will be greatest. The metro areas where automation's potential is highest include blue-collar or service-sector-centric places such as Toledo, Ohio; Greensboro, N.C.; Lakeland-Winter Haven, Fla.; and Las Vegas.

The top AI-exposed metro area, by contrast, is the tech hub of San Jose, Calif., followed by other large cities such as Seattle and Salt Lake City. Places less exposed to AI, the report says, range from bigger, service-oriented metro areas such as El Paso, Texas; Las Vegas; and Daytona Beach, Fla., to smaller leisure communities including Hilton Head and Myrtle Beach, S.C., and Ocean City, N.J.

AI will also likely have different impacts on different demographics than other forms of automation. In their report on the broader automation field, Muro, Maxim, and Whiton found that 47% of Latino or Hispanic workers are in jobs that could, in part or wholly, be automated. American Indians had the next highest automation potential, at 45%, followed by Black workers (44%), white workers (40%), and Asian Americans (39%). Reverse that order, and you'll come very close to the authors' conclusion on AI's impact on worker demographics: Asian Americans have the highest potential exposure to AI disruption, followed by white, Latino or Hispanic, and Black workers.

For all of these differences, one important similarity does exist for both AI's and broader automation's impact on the workforce: uncertainty. Artificial intelligence's real-world potential is clouded in ambiguity, and indeed, the AI report used the text of AI-based patents to attempt to foresee its usage in the workplace. The authors hypothesize that, far from taking over human work, AI may end up complementing labor in fields like medicine or law, possibly even creating new work and jobs as demand increases.

As new forms of automation emerge, they too could end up having any number of potential long-term impacts, including, paradoxically, increasing demand and creating jobs. "Machine substitution for labor improves productivity and quality and reduces the cost of goods and services," the authors write. "This may, though not always, and not forever, have the impact of increasing employment in these same sectors."

As policymakers draw up potential solutions to protect workers from technological disruption, it's important to keep in mind the differences between advancements like AI and automation at large, and who, exactly, they're poised to affect.

Link:
Automation and AI sound similar, but may have vastly different impacts on the future of work - Brookings Institution

Artificial intelligence requires trusted data, and a healthy DataOps ecosystem – ZDNet

Lately, we've seen many "x-Ops" management practices appear on the scene, all derived from DevOps, which seeks to coordinate the output of developers and operations teams into a smooth, consistent and rapid flow of software releases. Another emerging practice, DataOps, seeks to achieve a similarly smooth, consistent and rapid flow of data through enterprises. Like many things these days, DataOps is spilling over from the large Internet companies, which process petabytes and exabytes of information on a daily basis.

Such an uninhibited data flow is increasingly vital to enterprises seeking to become more data-driven and scale artificial intelligence and machine learning to the point where these technologies can have strategic impact.

Awareness of DataOps is high. A recent survey of 300 companies by 451 Research finds that 72 percent have active DataOps efforts underway, and the remaining 28 percent plan to launch them over the coming year. A majority, 86 percent, are increasing their spending on DataOps projects over the next 12 months. Most of this spending will go to analytics, self-service data access, data virtualization, and data preparation efforts.

In the report, 451 Research analyst Matt Aslett defines DataOps as "The alignment of people, processes and technology to enable more agile and automated approaches to data management."

The catch is that "most enterprises are unprepared, often because of behavioral norms -- like territorial data hoarding -- and because they lag in their technical capabilities -- often stuck with cumbersome extract, transform, and load (ETL) and master data management (MDM) systems," according to Andy Palmer and a team of co-authors in their latest report, Getting DataOps Right, published by O'Reilly. Across most enterprises, data is siloed, disconnected, and generally inaccessible. There is also an abundance of data that is completely undiscovered, of which decision-makers are not even aware.

Here are some of Palmer's recommendations for building and shaping a well-functioning DataOps ecosystem:

Keep it open: "The ecosystem in DataOps should resemble DevOps ecosystems in which there are many best-of-breed free and open source software and proprietary tools that are expected to interoperate via APIs." This also includes carefully evaluating and selecting from the raft of tools that have been developed by the large internet companies.

Automate it all: The collection, ingestion, organizing, storage and surfacing of massive amounts of data at as close to a near-real-time pace as possible has become almost impossible for humans to manage. Let the machines do it, Palmer urges. Areas ripe for automation include "operations, repeatability, automated testing, and release of data." Look to the ways DevOps is facilitating the automation of the software build, test, and release process, he points out.

Process data in both batch and streaming modes: While DataOps is about real-time delivery of data, there's still a place -- and reason -- for batch mode as well. "The success of Kafka and similar design patterns has validated that a healthy next-generation data ecosystem includes the ability to simultaneously process data from source to consumption in both batch and streaming modes," Palmer points out.
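
As a rough sketch of that dual-mode idea, the same Kafka topic can feed both a continuously running streaming consumer and a periodic batch job. This assumes the kafka-python client, a broker at localhost:9092, and hypothetical topic and helper names; it is an illustration, not a design from the report.

    # Sketch: one Kafka topic consumed in both streaming and batch modes.
    # Assumes the kafka-python package and a broker at localhost:9092;
    # topic and helper names are hypothetical.
    from kafka import KafkaConsumer

    TOPIC = "events"

    def handle(payload):             # placeholder per-record processing
        print("streamed:", payload)

    def load_into_warehouse(batch):  # placeholder bulk load
        print("loaded", len(batch), "records")

    def stream_events():
        """Streaming mode: process records continuously as they arrive."""
        consumer = KafkaConsumer(TOPIC, bootstrap_servers="localhost:9092",
                                 auto_offset_reset="latest")
        for record in consumer:
            handle(record.value)

    def batch_events(max_records=10_000, timeout_ms=5_000):
        """Batch mode: periodically drain what has accumulated, then stop."""
        consumer = KafkaConsumer(TOPIC, bootstrap_servers="localhost:9092",
                                 auto_offset_reset="earliest",
                                 consumer_timeout_ms=timeout_ms)
        batch = [record.value for record, _ in zip(consumer, range(max_records))]
        load_into_warehouse(batch)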

Track data lineage: Trust in the data is the single most important element in a data-driven enterprise, which may simply cease to function without it. That's why well-thought-out data governance and a metadata (data about data) layer are important. "A focus on data lineage and processing tracking across the data ecosystem results in reproducibility going up and confidence in data increasing," says Palmer.
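
A minimal sketch of what lineage tracking can look like in code, assuming a simple in-memory log: every transformation step records fingerprints of its input and output so a result can be traced back to its sources. The record layout and helper names are illustrative assumptions, not a schema from the report; in a real pipeline the entries would go to a metadata store.

    # Sketch: record lineage metadata for each transformation step so that
    # any output can be traced back to its inputs. The record layout and the
    # in-memory log are assumptions for illustration.
    import datetime
    import hashlib
    import json

    lineage_log = []  # in practice this would live in a metadata store

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()[:12]

    def run_step(step_name: str, transform, raw_input: bytes) -> bytes:
        output = transform(raw_input)
        lineage_log.append({
            "step": step_name,
            "input_fingerprint": fingerprint(raw_input),
            "output_fingerprint": fingerprint(output),
            "ran_at": datetime.datetime.utcnow().isoformat(),
        })
        return output

    cleaned = run_step("strip_whitespace", lambda b: b.strip(), b"  42, 17, 99  ")
    print(json.dumps(lineage_log, indent=2))  # a reproducible trail of who made what from what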

Have layered interfaces: Everyone touches data in different ways. "Some power users need to access data in its raw form, whereas others just want to get responses to inquiries that are well formulated," Palmer says. That's why a layered set of services and design patterns is required for the different personas of users. Palmer says there are three approaches to meeting these multilayered requirements.
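
A rough sketch of the layered idea itself (not of Palmer's three approaches): a raw layer hands power users the data exactly as stored, while a curated layer answers a well-formulated question directly. The dataset and function names below are hypothetical.

    # Sketch of layered data access: a raw layer for power users and a curated
    # layer for consumers who just want answers. Data and names are hypothetical.
    RAW_SALES = [
        {"region": "west", "amount": 1200.0, "returned": False},
        {"region": "west", "amount": 300.0, "returned": True},
        {"region": "east", "amount": 950.0, "returned": False},
    ]

    def raw_records():
        """Power-user layer: return the data exactly as stored."""
        return list(RAW_SALES)

    def net_revenue_by_region():
        """Curated layer: a pre-formulated answer with cleansing applied."""
        totals = {}
        for row in RAW_SALES:
            if not row["returned"]:
                totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
        return totals

    print(net_revenue_by_region())  # {'west': 1200.0, 'east': 950.0}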

Business leaders are increasingly leaning on their technology leaders and teams to transform their organizations into data-driven digital entities that can react to events and opportunities almost instantaneously. The best way to accomplish this -- especially given the meager budgets and limited support that often accompany the mandate -- is to align the way data flows from source to storage.

Continue reading here:
Artificial intelligence requires trusted data, and a healthy DataOps ecosystem - ZDNet

Adebayo Adeleke hosts Olusola Amusan, widely acclaimed as the Artificial Intelligence Evangelist on the third episode of Unfettered podcast. -…

In the third episode of the Unfettered Podcast, Adebayo Adeleke joined forces with the widely acclaimed Artificial Intelligence evangelist, Olusola Amusan, to deliver one of the most profound conversations on Artificial Intelligence.

Aptly themed "Artificial Intelligence in the 21st Century," this episode skilfully introduces listeners to the world of Artificial Intelligence. What is more compelling about this episode is how Olusola reveals what individuals and governments can do to prepare for the disruption that Artificial Intelligence will bring in the coming decades.

According to Adebayo, the host, this episode is crucial because it gives listeners in-depth knowledge of automation and the fourth industrial revolution, and of how these forces will affect our everyday lives.

For more information about the podcast, visit www.unfetteredpodcast.com. The episode is available on Apple Music, Spotify, Google Podcasts, Castbox, and Podotron.

About Adebayo Adeleke

Adebayo Adeleke is an entrepreneur, retired U.S. Army Major, and global thought leader. He is the Managing Partner at Pantote Solutions LLC (Dallas, TX), a Principal Partner and Senior Supply Chain Consultant for Epot Consulting Limited, and a Lecturer in Supply Chain Management at Sam Houston State University.

His unwavering desire to professionally mentor and guide African immigrants led him to start the Rising Leadership Foundation, a 501(c)(3) non-profit organization that seeks to transform governance and leadership through technology and mentoring in the inner cities of Texas, African immigrant communities, and the continent of Africa.

For more information about Adebayo Adeleke and all his projects, kindly visit www.adebayoadeleke.com/

Excerpt from:
Adebayo Adeleke hosts Olusola Amusan, widely acclaimed as the Artificial Intelligence Evangelist on the third episode of Unfettered podcast. -...

Microsoft launches $40 million artificial intelligence initiative to advance global health research – seattlepi.com

Microsoft campus in Redmond. (Photo: Xinhua News Agency via Getty Images)

Microsoft announced Wednesday that its newest $40 million investment in artificial intelligence (AI) will help advance global health initiatives, with two cash grants going to medical research at Seattle-based organizations.

As part of the tech giant's $165 million AI for Good initiative, this new public health branch will focus on three main areas: accelerating medical research around prevention and diagnosis of diseases, generating new insights about mortality and global health crises, and improving health equity by increasing access to care for under-served populations.

"As a tech company, it is our responsibility to ensure that organizations working on the most pressing societal issues have access to our latest AI technology and the expertise of our technical talent," wrote John Kahan, Chief Data Analytics Officer at Microsoft in a company blog. "Through AI for Health, we will support specific nonprofits and academic collaboration with Microsofts leading data scientists, access to best-in-class AI tools and cloud computing, and select cash grants."

One of the grants will go to Seattle Children's Hospital to continue their research on the causes and diagnosis of Sudden Infant Death Syndrome (SIDS). The Centers for Disease Control and Prevention estimated that 3,600 infants died in 2017 alone from SIDS.

Microsoft data scientists have already been working with researchers at Seattle Children's Hospital and discovered a correlation between maternal smoking and the fatal disease, estimating that 22 percent of the deaths from SIDS are attributed to smoking.

This research is personal for Kahan, who lost a son to SIDS.

"I saw firsthand, both personally and professionally, how you can marry artificial intelligence and medical research to advance this field, said Kahan in the program's launch event on Jan. 29. I saw because I lost my first son, and only son, to SIDS and I saw our head of data science partner with leading medical experts at Seattle Childrens and research institutes around the world."

Another grant will go towards Fred Hutchinson Cancer Research Center's Cascadia Data Discovery Initiative, which aims to accelerate cancer research by creating a system for institutions and researchers across the Pacific Northwest to share biomedical data.

Other grants will benefit the Novartis Foundation for efforts to eliminate leprosy and Intelligent Retinal Imaging Systems to distribute diabetic retinopathy diagnostic software to prevent blindness.

These grants come as AI's rapidly growing role across industries is being debated by professionals, especially in medicine. Microsoft stated that less than 5% of AI professionals are operating in the health and nonprofit sector, leaving medical researchers with a shortage of talent and knowledge in the field.

Technological innovations in AI are also moving faster than most doctors can prepare for. A recent study by Stanford Medicine found that only 7% of the 523 U.S. physicians surveyed thought they were "very prepared" to implement AI into their practice. The study called this a "transformation gap," citing that while most medical professionals can perceive the benefits of this technology for their patients, few feel prepared to adequately utilize it.

"Tomorrows clinicians not only need to be prepared to use AI, but they must also be ready to shape the technologys future development," the study states.

Other efforts in Microsoft's AI for Good initiative include AI for Earth, AI for Accessibility, AI for Cultural Heritage and AI for Humanitarian Action.

Read more from the original source:
Microsoft launches $40 million artificial intelligence initiative to advance global health research - seattlepi.com

NearShore Technology Talks About the Impact of Artificial Intelligence on Nearshoring – Science Times

(Photo: Bigstock) AI learning and artificial intelligence concept illustration.

The nearshoring technology industry is seeing rapid growth in demand from North American companies for engineering and data science services related to advances in the ongoing artificial intelligence (AI) revolution. Companies find high value in working on new and sophisticated applications with nearshoring firms that are close in proximity, time zones, language, and business culture.

In recent years, the costs involved in offshoring have increased relative to nearshoring costs. Additionally, tech education opportunities in the Western hemisphere have become more advantageous, and Western countries have far fewer holidays and lost workdays than offshore countries. In this article, NearShore Technology examines current AI trends impacting nearshoring.

AI has been an active field for computer scientists and logicians for decades, and in recent years hardware and software capabilities have advanced to the point where many AI processes can actually be implemented. In general, AI describes the ability of a program and associated hardware to simulate human intelligence, reasoning, and analysis of real-world data. Advances in algorithms are enabling greater learning, logic, and creativity in AI processes, and increased technological capabilities are allowing AI to process information in quantities, and with perceptive abilities, beyond traditional human powers. Many industrial processes are finding great utility in machine learning, an AI-based approach that allows technology systems to evolve and learn based on experience and self-development.

The huge tech companies that mainly focus on how customers use software programs are leading the way in AI development. Companies like Google, Amazon, and Facebook are positioning immense resources to advance their AI processes' abilities to understand and predict customer behavior. In addition to tech and retail firms, healthcare, financial services, and auto manufacturers (aiming at a future of autonomous cars) are all committing to developing effective AI tech. From routine activities such as customer support and billing to more intuition-based activities like investing and making strategic decisions, AI is becoming a central part of competing in almost every industry.

AI development requires experienced and skillful software engineers and programmers. The ability of an AI application to operate effectively depends first on the quantity and quality of the data it is provided. Algorithms must be able to perceive relevant data and also to learn and improve based on the data they receive. Programmers and engineers must be able to understand and facilitate algorithm improvement over time, as AI applications are never really completed and are constantly in development. Programmers must also rely on a sufficient number of competent data scientists and analysts to sort and assess the nature and quality of the information an AI application processes, in order to understand how well the AI is functioning. The entire process is changing and progressing quickly, and the effectiveness of AI is determined by the abilities of the engineers and programmers involved.
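
One simplified way to picture that ongoing improvement loop: measure a deployed model against fresh, human-labeled cases and adjust it when quality drifts below an agreed bar. The toy model, threshold, and function names below are assumptions for illustration only, not a method described in the article.

    # Sketch of continuous model maintenance: evaluate on fresh labeled data
    # and re-fit when accuracy drops below a bar. Everything here is a toy
    # stand-in chosen for illustration.
    class ThresholdModel:
        """Toy deployed model: predicts 1 when the input exceeds a cutoff."""
        def __init__(self, cutoff):
            self.cutoff = cutoff

        def predict(self, x):
            return int(x > self.cutoff)

    def evaluate(model, fresh_examples):
        """Fraction of recent, human-labeled cases the model gets right."""
        correct = sum(1 for x, label in fresh_examples if model.predict(x) == label)
        return correct / len(fresh_examples)

    def maintain(model, fresh_examples, accuracy_floor=0.90):
        """Re-fit the model (here: move the cutoff) when quality drifts too low."""
        score = evaluate(model, fresh_examples)
        if score < accuracy_floor:
            positives = [x for x, label in fresh_examples if label == 1]
            if positives:
                model = ThresholdModel(cutoff=min(positives) - 1e-9)
        return model, score

    fresh = [(0.2, 0), (0.4, 0), (0.7, 1), (0.9, 1)]
    model, score = maintain(ThresholdModel(cutoff=0.95), fresh)
    print("accuracy before maintenance:", score)  # 0.5, below the bar
    print("adjusted cutoff:", model.cutoff)       # re-fit to cover the positives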

Historically, many traditional IT services have been suited for offshoring. Most traditional IT and call center support services were routine, and the cost-efficiency of offshoring these processes around the world made economic sense in many situations. When skilled programming and data science are not a requirement, offshoring has had a place in the mix for many local companies. However, the worldwide shortage of skilled engineers and data scientists is most prevalent in the parts of the world normally used for offshore services.

Nearshoring AI technology development allows local companies to have meaningful, real-time relationships with programmers and data specialists who have the requisite skills. These nearshore relationships are vital given the ongoing nature of AI development.

Among the most important considerations in a successful nearshoring AI relationship is the actual skill and education of the nearshore firm's workers. A nearshore provider's team should be up to date with the latest technology developments and should have experience and a history of success in the relevant industry. Because AI work often depends on natural language use, it is important that AI developers are native or fluent speakers of the client company's language. Working with a nearshore firm that is close in time zone and location also helps the firm properly understand the culture and needs of a company's market and customers. A nearshore firm working on AI processes should feel like a complete partner, not just another outsourced provider of routine tasks.

NearShore Technology is a US firm headquartered in Atlanta with offices throughout North America. The company focuses on meeting all the technology needs of its clients. NearShore partners with technology officers and leaders to provide effective and timely solutions that fit each customer's unique needs. NearShore uses a family-based approach to provide superior IT, Medtech, Fintech, and related services to its customers and partners throughout North America.

View original post here:
NearShore Technology Talks About the Impact of Artificial Intelligence on Nearshoring - Science Times