Archive for the ‘Machine Learning’ Category

AI's J-curve and upcoming productivity boom – TechTalks

This article is part of our series that explores the business of artificial intelligence.

Digital technologies, and at their forefront artificial intelligence, are triggering fundamental shifts in society, politics, education, economy, and other fundamental aspects of life. These changes provide opportunities for unprecedented growth across different sectors of the economy. But at the same time, they entail challenges that organizations must overcome before they can tap into their full potential.

In a recent talk at an online conference organized by Stanford Human-Centered Artificial Intelligence (HAI), Stanford professor Erik Brynjolfsson discussed some of these opportunities and challenges.

Brynjolfsson, who directs Stanford's Digital Economy Lab, believes that in the coming decade, the use of artificial intelligence will be much more widespread than it is today. But its adoption will also face a period of lull, also known as the J-curve.

"There's a growing gap between what the technology is capable of and what it is already doing versus how we are responding to that," Brynjolfsson says. "And that's where a lot of our society's biggest challenges and problems and some of our biggest opportunities lie."

According to Brynjolfsson, the next decade will see significantly higher productivity thanks to a wave of powerful technologies, especially machine learning, that are finding their way into every computing device and application.

Advances in computer vision have been tremendous, especially in areas such as image recognition and medical imaging. Talking to phones, watches, and smart speakers has become commonplace thanks to advances in natural language processing and speech recognition. Product recommendation, ad placement, insurance underwriting, loan approval, and many other applications have benefited immensely from advances in machine learning.

In many areas, machine learning is reducing costs and accelerating production. For example, the application of large language models in programming can help software developers become much more productive and achieve more in less time.

In other areas, machine learning can help create applications that did not exist before. For example, generative deep learning models are creating new applications for arts, music, and other creative work. In areas such as online shopping, advances in machine learning can create major shifts in business models, such as moving from shopping-then-shipping to shipping-then-shopping.

The lockdowns and urgency caused by the Covid-19 pandemic accelerated the adoption of these technologies in different sectors, including remote work tools, robotic process automation, AI-powered drug research, and factory automation.

"The pandemic has been horrific in so many ways, but another thing it's done is it's accelerated the digitization of the economy, compressing in about 20 weeks what would have taken maybe 20 years of digitization," Brynjolfsson says. "We've all invested in technologies that are allowing us to adapt to a more digital world. We're not going to stay as remote as we are now, but we're not going all the way back either. And that increased digitization of business processes and skills compresses the timeframe for us to adopt these new ways of working and ultimately drive higher productivity."

The productivity potential of machine learning technologies has one big caveat.

"Historically, when these new technologies become available, they don't immediately translate into productivity growth. Often there's a period where productivity declines, where there's a lull," Brynjolfsson says. "And the reason there's this lull is that you need to reinvent your organizations, you need to develop new business processes."

Brynjolfsson calls this the "Productivity J-Curve" and has documented it in a paper published in the American Economic Journal: Macroeconomics. Essentially, realizing the great potential of new general-purpose technologies like the steam engine, electricity, and more recently machine learning requires fundamental changes in business processes and workflows, the co-invention of new products and business models, and investment in human capital.

These investments and changes often take several years, and during this period, they don't yield tangible results. During this phase, companies are creating intangible assets, according to Brynjolfsson. For example, they might be training and reskilling their workforce to employ these new technologies. They might be redesigning their factories or instrumenting them with new sensor technologies to take advantage of machine learning models. They might need to revamp their data infrastructure and create data lakes on which they can train and run ML models.

These efforts might cost millions of dollars (or billions in the case of large corporations) and make no change in the company's output in the short term. At first glance, it seems that costs are increasing without any return on investment. When these changes reach their turning point, they result in a sudden increase in productivity.

"We're in this period right now where we're making a lot of that painful transition, restructuring work, and there's a lot of companies that are struggling with that," Brynjolfsson says. "But we're working through that, and these J-curves will lead to higher productivity. According to our research, we're near the bottom and turning up."

Unfortunately, adapting to AI and other new digital technologies does not follow a predictable path. Most firms aren't making the transition correctly or lack the creativity and understanding to make it. Various studies show that most applied machine learning projects fail.

"Only about the top 10-15 percent of firms are doing most of the investment in these intangibles. The other 85-90 percent of firms are lagging behind and are hardly making any of the restructuring that's needed," Brynjolfsson says. "This is not just the big tech firms. This is within every industry: manufacturing, retail, finance, resources. In each category, we're seeing the leading firms pulling away from the rest. There's a growing performance gap."

But while adopting new technologies is going to be difficult, it is happening at a much faster pace than in previous cycles of technological advances because we are better prepared to make the transition.

"I think what is becoming clear is that it's going to happen a lot faster, in part because we have a much more professional class of people trying to study what works and what doesn't work," Brynjolfsson says. "Some of them are in business schools and academia. A lot of them are in consulting companies. Some of them are journalists. And there are people who are describing which practices work and which don't."

Another element that can help immensely is the availability of machine learning and data science tools to process and study the huge amounts of data available on organizations, people, and the economy.

For example, Brynjolfsson and his colleagues are working on a big dataset of 200 million job postings, which include the full text of the job description along with other information. Using different machine learning models and natural language processing techniques, they can transform the job posts into numerical vectors that can then be used for various tasks.

"We think of all the jobs as this mathematical space. We can understand how they can relate to each other," Brynjolfsson says.

For example, they can make simple inferences such as how similar or different two or more job posts are based on their text descriptions. They can use other techniques such as clustering and graph neural networks to draw more important conclusions, such as what kinds of skills are most in demand, or how the characteristics of a job post would change if the description were modified to add AI skills such as Python or TensorFlow. Companies can use these models to find holes in their hiring strategies or to analyze the hiring decisions of their competitors and leading organizations.
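As a toy illustration of the idea (not the lab's actual pipeline), job descriptions can be turned into fixed-length vectors and compared with cosine similarity. The sketch below uses simple feature hashing of words; real systems would use learned embeddings, but the geometry, related jobs sitting close together in a shared vector space, is the same. All job texts here are invented.

```python
import hashlib
import math
import re

def embed(text, dim=64):
    """Map a job description to a fixed-length vector via feature hashing,
    a simple stand-in for the NLP embeddings described in the article."""
    vec = [0.0] * dim
    for token in re.findall(r"[a-z]+", text.lower()):
        # Deterministic hash so the same token always lands in the same bucket.
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two job-post vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

ml_job = embed("Machine learning engineer: Python, TensorFlow, model training")
data_job = embed("Data scientist: Python, statistics, machine learning")
chef_job = embed("Head chef: menu design, kitchen operations, food safety")

# Related roles land closer together in the shared vector space.
assert cosine(ml_job, data_job) > cosine(ml_job, chef_job)
```

Once jobs live in such a space, distances between them are exactly the "skill adjacencies" discussed later in the article.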

"Those kinds of tools just didn't exist as recently as five years ago, and I think it's a revolution that is just as important as the microscope or some of the other revolutions in science," Brynjolfsson says. "We now have them for social sciences and business to have this kind of visibility. That's allowing us to make a transition a lot more rapidly than before."

However, Brynjolfsson warns that not many companies are using these kinds of tools. This is perhaps further testament to his previous point that companies have not yet figured out the right transition strategy and are relying on old methods to restructure and adapt themselves to the age of AI. And at the center of this strategy should be the correct use of human capital.

"You have hundreds of billions of dollars of human capital, of skills walking out the door, and then the company tries to hire back people with the skills that they need. What they don't realize is that the workers that they let go often had skills that were very adjacent to the ones they're hiring for," Brynjolfsson says.

With the help of machine learning, companies will have better visibility into and knowledge of their skill adjacencies, Brynjolfsson says. For example, a company might discover that instead of laying off a bunch of people and looking to hire new talent, all it needs to do is a little bit of retraining and repurposing of its workforce.

"It's much more expensive to hire somebody fresh than it would have been for them to take some of those people who are already in the company and say, if we teach you Python or customer service skills or other skills, you can be doing this job that we're looking to hire people for," Brynjolfsson says. "My hope is that, in the coming decade, workers will be in a much better position to take full advantage of their capabilities and skills. And it will be good for the companies too to understand all the assets that they have in there, and machine learning can help a lot with understanding those relationships."

Link:
AIs J-curve and upcoming productivity boom - TechTalks

Deploying machine learning to improve mental health | MIT News | Massachusetts Institute of Technology – MIT News

A machine-learning expert and a psychology researcher/clinician may seem an unlikely duo. But MIT's Rosalind Picard and Massachusetts General Hospital's Paola Pedrelli are united by the belief that artificial intelligence may be able to help make mental health care more accessible to patients.

In her 15 years as a clinician and researcher in psychology, Pedrelli says, it's been "very, very clear that there are a number of barriers for patients with mental health disorders to accessing and receiving adequate care." Those barriers may include figuring out when and where to seek help, finding a nearby provider who is taking patients, and obtaining financial resources and transportation to attend appointments.

Pedrelli is an assistant professor in psychology at Harvard Medical School and the associate director of the Depression Clinical and Research Program at Massachusetts General Hospital (MGH). For more than five years, she has been collaborating with Picard, an MIT professor of media arts and sciences and a principal investigator at MIT's Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), on a project to develop machine-learning algorithms to help diagnose and monitor symptom changes among patients with major depressive disorder.

Machine learning is a type of AI technology where, when the machine is given lots of data and examples of good behavior (i.e., what output to produce when it sees a particular input), it can get quite good at autonomously performing a task. It can also help identify patterns that are meaningful, which humans may not have been able to find as quickly without the machine's help. Using the wearable devices and smartphones of study participants, Picard and Pedrelli can gather detailed data on participants' skin conductance and temperature, heart rate, activity levels, socialization, personal assessments of depression, sleep patterns, and more. Their goal is to develop machine-learning algorithms that can take in this tremendous amount of data and make it meaningful, identifying when an individual may be struggling and what might be helpful to them. They hope that their algorithms will eventually equip physicians and patients with useful information about individual disease trajectory and effective treatment.

"We're trying to build sophisticated models that have the ability to not only learn what's common across people, but to learn categories of what's changing in an individual's life," Picard says. "We want to provide those individuals who want it with the opportunity to have access to information that is evidence-based and personalized, and makes a difference for their health."

Machine learning and mental health

Picard joined the MIT Media Lab in 1991. Three years later, she published a book, Affective Computing, which spurred the development of a field with that name. Affective computing is now a robust area of research concerned with developing technologies that can measure, sense, and model data related to people's emotions.

While early research focused on determining if machine learning could use data to identify a participant's current emotion, Picard and Pedrelli's current work at MIT's Jameel Clinic goes several steps further. They want to know if machine learning can estimate disorder trajectory, identify changes in an individual's behavior, and provide data that informs personalized medical care.

Picard and Szymon Fedor, a research scientist in Picard's affective computing lab, began collaborating with Pedrelli in 2016. After running a small pilot study, they are now in the fourth year of their National Institutes of Health-funded, five-year study.

To conduct the study, the researchers recruited MGH participants with major depressive disorder who had recently changed their treatment. So far, 48 participants have enrolled in the study. For 22 hours per day, every day for 12 weeks, participants wear Empatica E4 wristbands. These wearable wristbands, designed by one of the companies Picard founded, can pick up biometric data, like electrodermal (skin) activity. Participants also download apps on their phones that collect data on texts and phone calls, location, and app usage, and also prompt them to complete a biweekly depression survey.

Every week, patients check in with a clinician who evaluates their depressive symptoms.

"We put all of that data we collected from the wearable and smartphone into our machine-learning algorithm, and we try to see how well the machine learning predicts the labels given by the doctors," Picard says. "Right now, we are quite good at predicting those labels."
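The article does not describe the study's actual models, but the shape of the task, weekly feature vectors from wearables and phones predicting clinician-assigned labels, can be sketched with entirely synthetic numbers. The sketch below uses a nearest-centroid classifier as a minimal stand-in; every value and feature name is invented for illustration.

```python
import math
import random

# Synthetic stand-in for the study pipeline: weekly feature vectors
# (sleep hours, social contacts, resting heart rate) paired with
# clinician labels (1 = elevated depressive symptoms). All numbers
# are invented; the real models are far richer than this.
random.seed(0)

def simulated_week(label):
    if label:  # elevated symptoms: less sleep, less socializing, higher HR
        return [random.gauss(5.5, 0.5), random.gauss(2.0, 0.8), random.gauss(80, 3)]
    return [random.gauss(7.5, 0.5), random.gauss(6.0, 0.8), random.gauss(66, 3)]

data = [(simulated_week(y), y) for y in [0, 1] * 50]
train, test = data[:70], data[70:]

def centroid(rows):
    """Mean feature vector for one class."""
    return [sum(r[i] for r in rows) / len(rows) for i in range(len(rows[0]))]

c0 = centroid([x for x, y in train if y == 0])
c1 = centroid([x for x, y in train if y == 1])

def predict(x):
    """Nearest-centroid classifier: pick the closer class mean."""
    return int(math.dist(x, c1) < math.dist(x, c0))

accuracy = sum(predict(x) == y for x, y in test) / len(test)
# On this cleanly separated synthetic data, accuracy is near perfect;
# real clinical data is much noisier.
```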

Empowering users

While developing effective machine-learning algorithms is one challenge researchers face, designing a tool that will empower and uplift its users is another. Picard says, "The question we're really focusing on now is, once you have the machine-learning algorithms, how is that going to help people?"

Picard and her team are thinking critically about how the machine-learning algorithms may present their findings to users: through a new device, a smartphone app, or even a method of notifying a predetermined doctor or family member of how best to support the user.

For example, imagine a technology that records that a person has recently been sleeping less, staying inside their home more, and has a faster-than-usual heart rate. These changes may be so subtle that the individual and their loved ones have not yet noticed them. Machine-learning algorithms may be able to make sense of these data, mapping them onto the individual's past experiences and the experiences of other users. The technology may then be able to encourage the individual to engage in certain behaviors that have improved their well-being in the past, or to reach out to their physician.
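One simple way to make "subtle changes against a personal baseline" concrete is a z-score check: compare the mean of the last few days against the individual's own history, one signal at a time. This is an illustrative sketch, not the project's method; the function name, threshold, and data are all assumptions.

```python
import statistics

def drifted(history, recent, threshold=2.0):
    """Return True if the mean of `recent` sits more than `threshold`
    standard deviations away from the personal `history` baseline."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    if sigma == 0:  # flat baseline: any change at all counts as drift
        return statistics.mean(recent) != mu
    z = abs(statistics.mean(recent) - mu) / sigma
    return z > threshold

sleep_baseline = [7.2, 7.5, 6.9, 7.4, 7.1, 7.3, 7.0, 7.6]  # hours per night
last_three_nights = [6.1, 5.8, 6.0]

assert drifted(sleep_baseline, last_three_nights)  # a shift worth surfacing
```

A real system would combine many such signals and weigh how to present the result, which is exactly the design question the researchers raise next.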

If implemented incorrectly, it's possible that this type of technology could have adverse effects. If an app alerts someone that they're headed toward a deep depression, that could be discouraging information that leads to further negative emotions. Pedrelli and Picard are involving real users in the design process to create a tool that's helpful, not harmful.

"What could be effective is a tool that could tell an individual, 'The reason you're feeling down might be that the data related to your sleep has changed, and the data related to your social activity, and you haven't had any time with your friends, your physical activity has been cut down. The recommendation is that you find a way to increase those things,'" Picard says. The team is also prioritizing data privacy and informed consent.

"Artificial intelligence and machine-learning algorithms can make connections and identify patterns in large datasets that humans aren't as good at noticing," Picard says. "I think there's a real compelling case to be made for technology helping people be smarter about people."

Read more from the original source:
Deploying machine learning to improve mental health | MIT News | Massachusetts Institute of Technology - MIT News

Debit: The Long Count review – Mayans, machine learning and music – The Guardian

There is an uncanniness in listening to a musical instrument you have never heard being played for the first time. As your brain makes sense of a new sound, it tries to frame it within the realm of familiarity, producing a tussle between the known and unknown.

The second album from Mexican-American producer Delia Beatriz, AKA Debit, embraces this dissonance. Taking the flutes of the ancient Mayan courts as her raw material and inspiration, Beatriz used archival recordings from the Mayan Studies Institute at the Universidad Nacional Autónoma de México to create a digital library of their sounds. She then processed these ancient samples through a machine-learning program to create woozy, ambient soundscapes.

Since no written music has survived from the Mayan civilisation, Beatriz crafts a new language for these ancient wind instruments, straddling the electronic world of her 2017 debut Animus and the dilatory experimentalism of ambient music. The resulting 10 tracks make for a deliciously strange listening experience.

Opener 1st Day establishes the undulating tones that unify the record. They flutter like contemplative humming and veer from acoustic warmth to metallic note-bending. Each track is given a numbered day and time, as if documenting the passage of a ritual, and echoes resonate down the record: whistles appear like sirens during the moans of 1st Night and 3rd Night; snatches of birdsong are tucked between the reverb of 2nd Day and 5th Day.

The Long Count of the record's title seems to express the linear passage of time itself, one replicated in the eternal, fluid flute tones. We hear in them the warmth of the human breath that first produced their sound, as well as Beatriz's electronic filtering that extends their notes until they imperceptibly bleed into one another and fuzz like keys on a synth. It is a startlingly original and enveloping sound that leaves us with that ineffable feeling: the past unearthed and made new once more.

Korean composer Park Jiha releases her third album, The Gleam (tak:til), a solo work featuring uniquely sparse compositions of saenghwang mouth organ, piri oboe and yanggeum dulcimer. British-Ghanaian rapper KOG brings his debut LP, Zone 6, Agege (Heavenly Sweetness), a deeply propulsive mix of English, Pidgin and Ga lyrics set to Afrobeat fanfares. Cellist and composer Ana Carla Maza releases her latest album, Bahía (Persona Editorial), an affecting combination of Cuban son, bossa and chanson in homage to the music of her birthplace of Havana.

See the article here:
Debit: The Long Count review Mayans, machine learning and music - The Guardian

Legal Issues That Might Arise with Machine Learning and AI – Legal Reader

While AI-enabled decision-making seems to take out the subjective human areas of bias and prejudice, many observers worry that machine analytics have the same or different biases embedded in the systems.

As with many advances in technology, the legal issues can be unsettled until a body of case law has been established. This is likely to be the case with artificial intelligence or AI. While legal scholars have already begun discussing the ramifications of this advance, the number of court cases, though growing, has been relatively meager up to this point.

Rapid Advances in AI

New and more powerful chips have the potential to accelerate many applications that rely on AI. This removes some of the impediments that have made advances in AI slower than some observers anticipated, shortening the time it takes to train new machines and new models from months to just a few hours or even minutes. With better and faster chips for machine learning, the AI revolution can begin to reach its potential.

This potent advance will bring an array of important legal questions. This capability will usher in new ideas and techniques that will impact product development, analytics and more.

Important Impacts on Intellectual Property

While AI will impact many areas of the law, a fair share of its influence will be on intellectual property. Certainly, areas of negligence, unfairness, bias, cybersecurity and other matters will be important, but some might wonder who owns the fruits of innovations that come from AI. In general, the patentability of computer-generated works has not been established, and the default is that the owner of the AI design is the owner of the new material. Since a computer cannot own personal property, at present it also cannot hold intellectual property rights.

More study and discussion will no doubt go into this area of law. This will become more pressing as technological advances will make it more difficult to identify the creator of certain products or innovations.

Increasing Applications in Medical Fields

The healthcare industry is also very much involved in harnessing the power associated with AI. Many of these applications involve routine tasks that are not likely to present overly complex legal concerns, although they could result in the displacement of workers. While the processing of paperwork and billing is already underway, the use of AI for imaging, diagnosis and data analysis is likely to increase in the coming years.

This could have legal implications for cases that deal with medical malpractice. For example, could the creator of a system that is relied upon for an accurate diagnosis be sued if something goes wrong? While the potential is enormous, the possibility of error raises complicated questions when AI systems play a primary role.

Crucial Issues With Algorithmic Decision-Making

While AI-enabled decision-making seems to take out the subjective human areas of bias and prejudice, many observers worry that machine analytics have the same or different biases embedded in the systems. In many ways, these systems could discriminate against certain segments of society when it comes to housing or employment opportunities. These entail ethical questions that at some point will be challenged in a court of law.

The ultimate question is whether smart machines can outthink humans, or whether they simply contain the blind spots of their programmers. In a worst-case scenario, these embedded prejudices would be hard to combat, as they would come with the imprint of scientific progress. In other words, the biases would claim objectivity.

Some observers, though, believe that business practices have always been an arena for discrimination against certain workers. AI, thoughtfully engaged and carefully calibrated, could minimize these practices. It could offer more opportunities to a wider pool of individuals while minimizing the influence of favoritism.

The Legal Future of AI

As with other areas of the courts, AI issues will have to be slowly adjudicated in the court system. Certain decisions will establish court precedents that will gain a level of authority. Technological advances will continue to shape society and the international legal system.

See more here:
Legal Issues That Might Arise with Machine Learning and AI - Legal Reader

Research Engineer, Machine Learning job with NATIONAL UNIVERSITY OF SINGAPORE | 279415 – Times Higher Education (THE)

Job Description

The Vessel Collision Avoidance System is a real-time framework to predict and prevent vessel collisions based on the historical movement of vessels in heavy-traffic regions such as the Singapore Strait. We are looking for talented developers to join our development team to help us develop machine learning and agent-based simulation models to quantify vessel collision risk in the Singapore Strait and port. If you are data curious, excited about deriving insights from data, and motivated by solving a real-world problem, we want to hear from you.
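The posting does not describe the system's internals, but one classic building block for quantifying collision risk from vessel tracks is the closest point of approach (CPA) between two constant-velocity tracks. The sketch below is an illustrative assumption, not NUS's actual model; units (nautical miles, knots) and the half-mile risk threshold are invented for the example.

```python
import math

def cpa(p1, v1, p2, v2):
    """Return (time_to_cpa_hours, distance_at_cpa) for two vessels
    moving at constant velocity. Positions in nautical miles, speeds
    in knots; times clamped to the future (t >= 0)."""
    dp = (p2[0] - p1[0], p2[1] - p1[1])  # relative position
    dv = (v2[0] - v1[0], v2[1] - v1[1])  # relative velocity
    dv2 = dv[0] ** 2 + dv[1] ** 2
    if dv2 == 0:  # identical velocities: separation never changes
        return 0.0, math.hypot(*dp)
    t = max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    return t, math.hypot(dp[0] + dv[0] * t, dp[1] + dv[1] * t)

# Two vessels on crossing courses in a strait.
t, d = cpa(p1=(0.0, 0.0), v1=(10.0, 0.0), p2=(5.0, -5.0), v2=(0.0, 10.0))
risky = d < 0.5  # flag encounters closing within half a nautical mile
```

A simulation or ML model could then learn how often such risky geometries arise in historical traffic and how vessels maneuver out of them.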

Qualifications

A B.Sc. in a quantitative field (e.g., Computer Science, Statistics, Engineering, Science)
Good coding habits in Python and the ability to solve problems at a fast pace
Familiarity with popular machine learning models
Eagerness to learn new things and passion for the work
A sense of responsibility; team-oriented and results-oriented
The ability to communicate results clearly and a focus on driving impact

More Information

Location: Kent Ridge Campus
Organization: Engineering
Department: Industrial Systems Engineering And Management
Employee Referral Eligible: No
Job requisition ID: 7334

Read more from the original source:
Research Engineer, Machine Learning job with NATIONAL UNIVERSITY OF SINGAPORE | 279415 - Times Higher Education (THE)