Archive for the ‘Machine Learning’ Category

AI can predict signs of a heart attack within a year from a routine eye test – KTLA Los Angeles

LEEDS, United Kingdom (StudyFinds.org) An artificial intelligence system is capable of spotting whether someone will have a heart attack within the next year through a routine eye scan.

A team from the University of Leeds believes this AI tool opens the door to a cheap and simple screening program for the world's No. 1 killer. Their tests find the computer can predict patients at risk of a heart attack in the next 12 months with up to 80% accuracy. The breakthrough adds to evidence that our eyes are not just windows to the soul, but windows to overall health as well.

"Cardiovascular diseases, including heart attacks, are the leading cause of early death worldwide and the second-largest killer in the UK. This causes chronic ill-health and misery worldwide," project supervisor Professor Alex Frangi says in a university release.

"This technique opens up the possibility of revolutionizing the screening of cardiac disease. Retinal scans are comparatively cheap and routinely used in many optician practices. As a result of automated screening, patients who are at high risk of becoming ill could be referred for specialist cardiac services," Frangi adds.

The system could also be used to track early signs of heart disease.

The retina is a small membrane at the back of the eye containing light-sensitive cells. Doctors have found that changes to its tiny blood vessels can hint at vascular disease, including heart problems.

Study authors used an advanced type of AI, known as deep learning, to teach the machine to automatically read more than 5,000 eye scans. The scans come from routine eye tests during visits to opticians or eye clinics. All of the participants are part of the UK Biobank, which tracks the health of half a million adults.

Deep learning is a complex series of algorithms that enable machines to make forecasts based on patterns in data. The technique, described in the journal Nature Machine Intelligence, could revolutionize heart therapy, according to the researchers.
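As a rough illustration of the kind of pattern-based forecasting described above, the sketch below fits a single logistic unit to synthetic "retinal feature" data and scores risk from it. Everything here (the features, the hidden relationship, the data) is invented for illustration; the Leeds system is a far larger deep network trained on real UK Biobank scans.

```python
import numpy as np

# Toy stand-in for the pipeline described above: map retinal-scan
# features to a 12-month risk score. The real system is a deep network
# trained on >5,000 UK Biobank scans; here we fit a single logistic
# unit on synthetic features purely to show "forecasts from patterns".
rng = np.random.default_rng(0)

n = 500
X = rng.normal(size=(n, 3))                 # synthetic retinal features
true_w = np.array([1.5, -1.0, 0.8])         # hidden feature-risk relationship
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

w = np.zeros(3)
lr = 0.1
for _ in range(2000):                       # gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= lr * X.T @ (p - y) / n

pred = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The learned weights recover the hidden relationship from examples alone, which is the essence of the approach, if not its scale.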

"The AI system has the potential to identify individuals attending routine eye screening who are at higher future risk of cardiovascular disease, whereby preventative treatments could be started earlier to prevent premature cardiovascular disease," says co-author Professor Chris Gale, a consultant cardiologist at Leeds Teaching Hospitals NHS Trust.

The study identified associations between pathology in the retina and changes in the patient's heart. Once the system learned each image pattern, the AI could estimate the size and pumping efficiency of the left ventricle from retinal scans alone.

This is one of the heart's four chambers. An enlarged ventricle can increase the risk of heart disease. The computer combined the estimated size of the left ventricle and its pumping efficiency with data like age and sex.

Currently, doctors determine this information using MRI (magnetic resonance imaging) or echocardiography scans of the heart. These diagnostic tests are expensive and often only available in a hospital, making them inaccessible for many people in countries with less-developed health care systems. They also increase health care costs and waiting times in wealthy nations.

"The AI system is an excellent tool for unravelling the complex patterns that exist in nature, and that is what we have found: the intricate pattern of changes in the retina linked to changes in the heart," adds co-author Sven Plein of the British Heart Foundation.

A recent study discovered a similar link between biological aging of the retina and mortality. Those with a retina older than their actual age were up to 67% more likely to die over the next decade.

South West News Service writer Mark Waghorn contributed to this report.

See more here:
AI can predict signs of a heart attack within a year from a routine eye test - KTLA Los Angeles

Senior Research Associate in Machine Learning job with UNIVERSITY OF NEW SOUTH WALES | 279302 – Times Higher Education (THE)

Work type: Full-time
Location: Canberra, ACT
Categories: Lecturer

UNSW Canberra is a campus of the University of New South Wales located at the Australian Defence Force Academy in Canberra. UNSW Canberra endeavours to offer staff a rewarding experience and offers many opportunities and attractive benefits, including:

At UNSW, we pride ourselves on being a workplace where the best people come to do their best work.

The School of Engineering and Information Technology (SEIT) offers a flexible, friendly working environment that is well-resourced and delivers research-informed education as part of its accredited, globally recognised engineering and computing degrees to its undergraduate students. The School offers programs in electrical, mechanical, aeronautical, and civil engineering as well as in aviation, information technology and cyber security to graduates and professionals who will be Australia's future technology decision makers.

We are seeking a person for the role of Postdoctoral Researcher / Senior Research Fellow in the area of machine learning.

About the Role:

Role: Postdoctoral Researcher / Senior Research Fellow
Salary: Level B: $110,459 - $130,215 plus 17% Superannuation
Term: Fixed-term, 12 months, full-time

About the Successful Applicants

To be successful in this role you will have:

In your application you should submit a 1-page document outlining how you meet the Skills and Experience outlined in the Position Description. Please clearly indicate the level you are applying for.

In order to view the Position Description please ensure that you allow pop-ups for Jobs@UNSW Portal.

The successful candidate will be required to undertake pre-employment checks prior to commencement in this role. The checks that will be undertaken are listed in the Position Description. You will not be required to provide any further documentation or information regarding the checks until directly requested by UNSW.

The position is located in Canberra, ACT. The successful candidate will be required to work from the UNSW Canberra campus. To be successful you will hold Australian Citizenship and have the ability to apply for a Baseline Security Clearance. Visa sponsorship is not available for this appointment.

For further information about UNSW Canberra, please visit our website: UNSW Canberra

Contact: Timothy Lynar, Senior Lecturer

E: t.lynar@adfa.edu.au

T: 02 51145175

Applications Close: 13 February 2022, 11:30 PM

Find out more about working at UNSW Canberra

At UNSW Canberra, we celebrate diversity and understand the benefits that inclusion brings to the university. We aim to ensure that our culture, policies, and processes are truly inclusive. We are committed to developing and maintaining a workplace where everyone is valued and respected for who they are and supported in achieving their professional goals. We welcome applications from Aboriginal and Torres Strait Islander people, Women at all levels, Culturally and Linguistically Diverse People, People with Disability, LGBTIQ+ People, people with family and caring responsibilities and people at all stages of their careers. We encourage everyone who meets the selection criteria and shares our commitment to inclusion to apply.

Any questions about the application process - please email unswcanberra.recruitment@adfa.edu.au

Excerpt from:
Senior Research Associate in Machine Learning job with UNIVERSITY OF NEW SOUTH WALES | 279302 - Times Higher Education (THE)

Autonomy in Action: These Machines Bring Imagination to Life – Agweb Powered by Farm Journal

By Margy Eckelkamp and Katie Humphreys

Machinery has amplified the workload farmers can accomplish, and technology has delivered greater efficiencies. Now, autonomy is poised to introduce new levels of productivity and fun.

Unlike its technology cousins, guidance and GPS-enabled controls, autonomy relocates the operator to anywhere but the cab.

"True autonomy is taking off the training wheels," says Steve Cubbage, vice president of services for Farmobile. "It doesn't require human babysitting. Good autonomy is predicated on good data and lots of it."

As machines are making decisions on the fly, companies seek to enable them to provide the quality and consistency expected by the farmer.

"We could see mainstream adoption in five to 10 years. It might surprise us depending on how far we advance artificial intelligence (AI), data collection, etc.," Cubbage says. "Don't say it can't happen in a short time, because it can. Autosteer was a great example of quick and unexpected acceptance."

Learn more about the robots emerging on the horizon.

The NEXAT is an autonomous machine, ranging from 20' to 80', that can be used for tillage, planting, spraying and harvesting. The interchangeable implements are mounted between four electrically driven tracks. Source: NEXAT

"The idea and philosophy behind the NEXAT is to enable a holistic crop production system where 95% of the cultivated area is free of soil compaction," says Lothar Fli, who works in marketing for NEXAT. "This system offers the best setup for carbon farming in combination with the possibility for regenerative agriculture and optimal yield potential."

The NEXAT system carries the modules, rather than pulls them, as Fli describes, which allowed the company to develop a simpler and lighter machine that delivers 50% more power with 40% less weight. In operation, weight is transferred onto the carrier vehicle and large tracks and optimized so it becomes a self-propelled machine.

"This enables the implements to be guided more accurately and with less slip, reducing fuel consumption and CO2 emissions more than 30%," he says. "Because the NEXAT carries the implement, there's not an extra chassis with extra wheels. The setup creates the best precision at a high working width that reduces soil compaction on the growing areas."

In the field, the machine is driven horizontally but rotates 90° for road travel. Two independent 545-hp diesel engines supply power. The cab, which can rotate 270°, is the basis for fully automated operation but enables manual guidance.

The tillage and planting modules came from Väderstad, a Swedish company. The CrossCutter disks for tillage and Tempo planter components are no different than what's found on traditional Väderstad implements.

The crop protection modules, which work like a conventional self-propelled sprayer, come from the German company Dammann. The sprayer has a 230' boom, with ground clearance up to 6.5', and a 6,340-gal. tank.

The NexCo combine harvester module achieves grain throughputs of 130 to 200 tons per hour.

A 19' long axial rotor is mounted transverse to the direction of travel and the flow of harvested material is introduced centrally into the rotor and at an angle to achieve energy efficiency. The rotor divides it into two material flows, which according to NEXAT, enables roughly twice the threshing performance of conventional machines. Two choppers provide uniform straw and chaff distribution, even with a 50' cutting width.

The grain hopper holds 1,020 bu. and can be unloaded in a minute. See the NEXAT system in action.

At the Consumer Electronics Show, John Deere introduced its full autonomy solution for tractors, which will be available to farmers later in 2022. Its tractors are outfitted with:

Farmers can control machines remotely via the JD Operations Center app on a phone, tablet or computer.

Unlike autonomous cars, tractors need to do more than just be a shuttle from point A to point B, says Deanna Kovar, product strategy at John Deere.

"When tractors are going through the field, they have to follow a very precise path and do very specific jobs," she says. "An autonomous 8R tractor is one giant robot. Within 1" of accuracy, it is able to perform its job without human intervention."

Artificial intelligence and machine learning are key technologies to John Deere's vision for the future, says Jahmy Hindman, John Deere's chief technology officer. In the past five years the company has acquired two Silicon Valley technology startups: Blue River Technology and Bear Flag Robotics.

This specific autonomy product has been in development for at least three years as the John Deere team collected images for its machine learning library. Users have access to live video and images via the app.

The real-time delivery of performance information is critical, John Deere highlights, to building trust in the system's performance.

For example, Willy Pell, John Deere senior director of autonomous systems, explains that even if the tractor encounters an anomaly or an undetectable object, safety measures will stop the machine.

While the initial introduction of the fully autonomous tractor showed a tillage application, Jorge Heraud, John Deere vice president of automation and autonomy, shares three other examples of how the company is bringing forward new solutions:

See the John Deere autonomous tractor launch.

New Holland has developed the first chopped material distribution system with direct measurement technology: the OptiSpread Automation System. 2D radar sensors mounted on both sides of the combine measure the speed and throw of the chopped material. If the distribution pattern no longer corresponds to the nominal distribution pattern over the entire working width, the rotational speed of the hydraulically driven feed rotors increases or decreases until the distribution pattern once again matches. The technology registers irregular chopped material distribution, even with a tailwind or headwind, and produces a distribution map.
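The closed loop New Holland describes, measure the actual throw, compare it to the nominal pattern, and trim rotor speed until the two match, can be sketched as a simple proportional controller. The gains, units, and throw model below are invented for illustration and are not New Holland's.

```python
# Illustrative proportional control loop for chopped-material
# distribution: if the measured throw falls short of the nominal
# width (e.g., because of a headwind), spin the feed rotor faster;
# if it overshoots, slow it down. All numbers here are made up.

NOMINAL_THROW_M = 6.0      # target throw per side, metres
GAIN = 20.0                # rpm correction per metre of error

def measured_throw(rotor_rpm, wind_offset_m):
    """Toy model: throw grows with rotor speed, shifted by wind."""
    return rotor_rpm / 100.0 + wind_offset_m

rotor_rpm = 500.0
wind_offset_m = -0.8       # a headwind shortens the throw

for _ in range(50):
    error = NOMINAL_THROW_M - measured_throw(rotor_rpm, wind_offset_m)
    rotor_rpm += GAIN * error          # proportional correction

final_error = abs(NOMINAL_THROW_M - measured_throw(rotor_rpm, wind_offset_m))
print(f"rotor speed: {rotor_rpm:.0f} rpm, residual error: {final_error:.4f} m")
```

The loop converges because each correction shrinks the error by a fixed fraction; the real system closes the same loop with 2D radar measurements instead of a toy model.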

The system received an Agritechnica silver innovation award. Source: CNH

As part of Vermeer's 50th anniversary celebration in 2021, a field demonstration was held at its Pella, Iowa, headquarters to unveil its autonomous bale mover. The BaleHawk navigates through a field via onboard sensors to locate bales, pick them up and move them to a predetermined location.

With the capacity to load three bales at a time, the BaleHawk was successfully tested with bales weighing up to 1,300 lb. The empty weight of the vehicle is less than 3 tons. Vermeer sees the lightweight concept as a solution to reduce compaction.

See the Vermeer BaleHawk in action. Source: Vermeer

In April 2021, Philipp Horsch, with German farm machinery manufacturer Horsch Maschinen, tweeted about its Robo autonomous planter. He said the machine was likely to be released for sale in about two years, depending on efforts to change current regulations, which state that for fully autonomous vehicle use in Germany, a person must stay within 2,000' to watch the machine.

The Horsch Robo is equipped with a Trimble navigation system and fitted with a large seed hopper. See the system in action. Source: Horsch

Katie Humphreys wears the hat of content manager for the Producer Media group. Along with writing and editing, she helps lead the content team and Test Plot efforts.

Margy Eckelkamp, The Scoop Editor and Machinery Pete director of content development, has reported on machinery and technology since 2006.

Read this article:
Autonomy in Action: These Machines Bring Imagination to Life - Agweb Powered by Farm Journal

Reinforcement learning for the real world – TechTalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Labor- and data-efficiency remain two of the key challenges of artificial intelligence. In recent decades, researchers have proven that big data and machine learning algorithms reduce the need for providing AI systems with prior rules and knowledge. But machine learning, and more recently deep learning, have presented their own challenges, which require manual labor, albeit of a different nature.

Creating AI systems that can genuinely learn on their own with minimal human guidance remains a holy grail and a great challenge. According to Sergey Levine, assistant professor at the University of California, Berkeley, a promising direction of research for the AI community is self-supervised offline reinforcement learning.

This is a variation of the RL paradigm that is very close to how humans and animals learn to reuse previously acquired data and skills, and it can be a great boon for applying AI to real-world settings. In a paper titled "Understanding the World Through Action" and a talk at the NeurIPS 2021 conference, Levine explained how self-supervised learning objectives and offline RL can help create generalized AI systems that can be applied to various tasks.

One common argument in favor of machine learning algorithms is their ability to scale with the availability of data and compute resources. Decades of work on developing symbolic AI systems have produced limited results. These systems require human experts and engineers to manually provide the rules and knowledge that define the behavior of the AI system.

The problem is that in some applications, the rules can be virtually limitless, while in others, they can't be explicitly defined.

In contrast, machine learning models can derive their behavior from data, without the need for explicit rules and prior knowledge. Another advantage of machine learning is that it can glean its own solutions from its training data, which are often more accurate than knowledge engineered by humans.

But machine learning faces its own challenges. Most ML applications are based on supervised learning and require training data to be manually labeled by human annotators. Data annotation poses severe limits to the scaling of ML models.

More recently, researchers have been exploring unsupervised and self-supervised learning, ML paradigms that obviate the need for manual labels. These approaches have helped overcome the limits of machine learning in some applications such as language modeling and medical imaging. But theyre still faced with challenges that prevent their use in more general settings.

"Current methods for learning without human labels still require considerable human insight (which is often domain-specific!) to engineer self-supervised learning objectives that allow large models to acquire meaningful knowledge from unlabeled datasets," Levine writes.

Levine writes that the next objective should be to create AI systems that don't require manual labeling or the manual design of self-supervised objectives. These models should be able to distill a deep and meaningful understanding of the world and can perform downstream tasks with robustness, generalization, and even a degree of common sense.

Reinforcement learning is inspired by intelligent behavior in animals and humans. Reinforcement learning pioneer Richard Sutton describes RL as the first computational theory of intelligence. An RL agent develops its behavior by interacting with its environment, weighing the punishments and rewards of its actions, and developing policies that maximize rewards.
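The reward-driven loop described above, act, observe reward, adjust policy, can be made concrete with a few lines of tabular Q-learning. The toy chain environment and hyperparameters below are invented for illustration; real deep RL systems replace the table with a neural network.

```python
import random

# Tabular Q-learning on a 5-state chain: the agent starts at state 0
# and earns a reward of 1 only when it reaches state 4. Action 1 moves
# right, action 0 moves left. From reward alone, the agent learns that
# always moving right maximizes return.
random.seed(0)

N_STATES, ALPHA, GAMMA, EPS = 5, 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def greedy(qs):
    # break ties randomly so the untrained agent still explores
    return random.randrange(2) if qs[0] == qs[1] else (0 if qs[0] > qs[1] else 1)

for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        a = random.randrange(2) if random.random() < EPS else greedy(Q[s])
        s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Bellman update: move Q toward reward plus discounted future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy(Q[s]) for s in range(N_STATES - 1)]
print("greedy policy (1 = move right):", policy)
```

The agent is never told that "right" is good; the policy emerges from weighing rewards, which is the computational core of the paradigm Sutton describes.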

RL, and more recently deep RL, have proven to be particularly efficient at solving complicated problems such as playing games and training robots. And theres reason to believe reinforcement learning can overcome the limits of current ML systems.

But before it does, RL must overcome its own set of challenges that limit its use in real-world settings.

"We could think of modern RL research as consisting of three threads: (1) getting good results in simulated benchmarks (e.g., video games); (2) using simulation + transfer; (3) running RL in the real world," Levine told TechTalks. "I believe that ultimately (3) is the most important thing, because that's the most promising approach to solve problems that we can't solve today."

Games are simple environments. Board games such as chess and go are closed worlds with deterministic environments. Even games such as StarCraft and Dota, which are played in real time and have nearly unlimited states, are much simpler than the real world. Their rules don't change. This is partly why game-playing AI systems have found very few applications in the real world.

On the other hand, physics simulators have seen tremendous advances in recent years. One of the popular methods in fields such as robotics and self-driving cars has been to train reinforcement learning models in simulated environments and then finetune the models with real-world experience. But as Levine explained, this approach is limited too, because the domains where we most need learning, the ones where humans far outperform machines, are also the ones that are hardest to simulate.

"This approach is only effective at addressing tasks that can be simulated, which is bottlenecked by our ability to create lifelike simulated analogues of the real world and to anticipate all the possible situations that an agent might encounter in reality," Levine said.

"One of the biggest challenges we encounter when we try to do real-world RL is generalization," Levine said.

For example, in 2016, Levine was part of a team that constructed an arm farm at Google with 14 robots all learning concurrently from their shared experience. They collected more than half a million grasp attempts, and it was possible to learn effective grasping policies in this way.

"But we can't repeat this process for every single task we want robots to learn with RL," he says. "Therefore, we need more general-purpose approaches, where a single ever-growing dataset is used as the basis for a general understanding of the world on which more specific skills can be built."

In his paper, Levine points to two key obstacles in reinforcement learning. First, RL systems require manually defined reward functions or goals before they can learn the behaviors that help accomplish those goals. And second, reinforcement learning requires online experience and is not data-driven, which makes it hard to train RL systems on large datasets. Most recent accomplishments in RL have relied on engineers at very wealthy tech companies using massive compute resources to generate immense amounts of experience instead of reusing available data.

Therefore, RL systems need solutions that can learn from past experience and repurpose their learnings in more generalized ways. Moreover, they should be able to handle the continuity of the real world. Unlike simulated environments, you cant reset the real world and start everything from scratch. You need learning systems that can quickly adapt to the constant and unpredictable changes to their environment.

In his NeurIPS talk, Levine compares real-world RL to the story of Robinson Crusoe, the story of a man who is stranded on an island and learns to deal with unknown situations through inventiveness and creativity, using his knowledge of the world and continued exploration in his new habitat.

"RL systems in the real world have to deal with a lifelong learning problem, evaluate objectives and performance based entirely on realistic sensing without access to privileged information, and must deal with real-world constraints, including safety," Levine said. "These are all things that are typically abstracted away in widely used RL benchmark tasks and video game environments."

However, RL does work in more practical real-world settings, Levine says. For example, in 2018, he and his colleagues built an RL-based robotic grasping system that attained state-of-the-art results with raw sensory perception. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, in their method, the robot continuously updated its grasp strategy based on the most recent observations to optimize long-horizon grasp success.

"To my knowledge this is still the best existing system for grasping from monocular RGB images," Levine said. "But this sort of thing requires algorithms that are somewhat different from those that perform best in simulated video game settings: it requires algorithms that are adept at utilizing and reusing previously collected data, algorithms that can train large models that generalize, and algorithms that can support large-scale real-world data collection."

Levine's reinforcement learning solution includes two key components: unsupervised/self-supervised learning and offline learning.

In his paper, Levine describes self-supervised reinforcement learning as a system that can "learn behaviors that control the world in meaningful ways" and that provides some mechanism to learn to "control [the world] in as many ways as possible."

Basically, this means that instead of being optimized for a single goal, the RL agent should be able to achieve many different goals by computing counterfactuals, learning causal models, and obtaining a deep understanding of how actions affect its environment in the long term.
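One common way to let a single agent achieve many different goals is goal-conditioned value learning: the desired goal becomes an input alongside the state, so one learner covers every goal. The toy sketch below illustrates the idea on a small 5-state chain; the environment and hyperparameters are invented, and this is not Levine's actual method.

```python
import random

# Goal-conditioned Q-learning: Q is indexed by (state, goal, action),
# so one learner can reach ANY requested state instead of optimizing a
# single fixed reward. The "reward" is self-supervised: 1 exactly when
# the sampled goal is reached, with no hand-designed reward function.
random.seed(2)

N, ALPHA, GAMMA, EPS = 5, 0.5, 0.9, 0.2
Q = [[[0.0, 0.0] for _ in range(N)] for _ in range(N)]  # Q[state][goal][action]

for _ in range(3000):
    g = random.randrange(N)                 # sample a fresh goal each episode
    s = random.randrange(N)
    for _ in range(20):
        if s == g:
            break
        qs = Q[s][g]
        if random.random() < EPS or qs[0] == qs[1]:
            a = random.randrange(2)
        else:
            a = 0 if qs[0] > qs[1] else 1
        s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
        target = 1.0 if s2 == g else GAMMA * max(Q[s2][g])
        Q[s][g][a] += ALPHA * (target - Q[s][g][a])
        s = s2

# The same table now serves opposite goals from different states.
toward_4 = max((0, 1), key=lambda a: Q[1][4][a])   # from state 1, reach state 4
toward_0 = max((0, 1), key=lambda a: Q[3][0][a])   # from state 3, reach state 0
print(toward_4, toward_0)
```

Because the goal is an input rather than baked into the reward, the one learned table answers "how do I get anywhere from here?", a miniature version of controlling the world in as many ways as possible.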

However, creating self-supervised RL models that can solve various goals would still require a massive amount of experience. To address this challenge, Levine proposes offline reinforcement learning, which makes it possible for models to continue learning from previously collected data without the need for continued online experience.

"Offline RL can make it possible to apply self-supervised or unsupervised RL methods even in settings where online collection is infeasible, and such methods can serve as one of the most powerful tools for incorporating large and diverse datasets into self-supervised RL," he writes.
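A minimal sketch of the offline idea: all learning happens by sweeping a frozen buffer of previously logged transitions, with no further interaction. The environment (a 5-state chain with reward 1 at terminal state 4) and the logged random-behavior data below are synthetic and purely illustrative.

```python
import random

# Offline (fitted) Q iteration: the learner never touches the
# environment again. It repeatedly sweeps a fixed buffer of
# (state, action, reward, next_state, done) tuples collected earlier.
random.seed(1)

N_STATES, GAMMA, ALPHA = 5, 0.9, 0.3

# Step 1: a dataset logged earlier by a random behavior policy.
dataset = []
for _ in range(200):
    s = 0
    while s != N_STATES - 1:
        a = random.randrange(2)                         # random behavior
        s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        done = s2 == N_STATES - 1
        dataset.append((s, a, 1.0 if done else 0.0, s2, done))
        s = s2

# Step 2: learning from the frozen buffer only -- no new experience
# is ever collected, which is the defining property of offline RL.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(100):
    for s, a, r, s2, done in dataset:
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])

policy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(N_STATES - 1)]
print("policy learned purely offline:", policy)
```

Even though the logged behavior was random, the extracted policy is optimal, which is the sense in which offline RL can repurpose previously collected data.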

The combination of self-supervised and offline RL can help create agents that can create building blocks for learning new tasks and continue learning with little need for new data.

This is very similar to how we learn in the real world. For example, when you want to learn basketball, you use basic skills you learned in the past such as walking, running, jumping, handling objects, etc. You use these capabilities to develop new skills such as dribbling, crossovers, jump shots, free throws, layups, straight and bounce passes, eurosteps, dunks (if you're tall enough), etc. These skills build on each other and help you reach the bigger goal, which is to outscore your opponent. At the same time, you can learn from offline data by reflecting on your past experience and thinking about counterfactuals (e.g., what would have happened if you passed to an open teammate instead of taking a contested shot). You can also learn by processing other data such as videos of yourself and your opponents. In fact, on-court experience is just part of your continuous learning.

In a paper, Yevgen Chebotar, one of Levine's colleagues, shows how self-supervised offline RL can learn policies for fairly general robotic manipulation skills, directly reusing data collected for another project.

"This system was able to reach a variety of user-specified goals, and also act as a general-purpose pretraining procedure (a kind of BERT for robotics) for other kinds of tasks specified with conventional reward functions," Levine said.

One of the great benefits of offline and self-supervised RL is learning from real-world data instead of simulated environments.

"Basically, it comes down to this question: is it easier to create a brain, or is it easier to create the universe? I think it's easier to create a brain, because it is part of the universe," he said.

This is, in fact, one of the great challenges engineers face when creating simulated environments. For example, Levine says, effective simulation for autonomous driving requires simulating other drivers, which requires having an autonomous driving system, which requires simulating other drivers, which requires having an autonomous driving system, etc.

"Ultimately, learning from real data will be more effective because it will simply be much easier and more scalable, just as we've seen in supervised learning domains in computer vision and NLP, where no one worries about using simulation," he said. "My perspective is that we should figure out how to do RL in a scalable and general-purpose way using real data, and this will spare us from having to expend inordinate amounts of effort building simulators."

See the article here:
Reinforcement learning for the real world - TechTalks

Artificial Intelligence and Sophisticated Machine Learning Techniques are Being Used to Develop Pathogenesi… – Physician’s Weekly

Most scientific areas now use big data analysis to extract knowledge from complicated and massive databases. This method is now utilized in medicine to investigate large groups of individuals. This review examined how artificial intelligence and sophisticated machine learning approaches have been employed to investigate pathogenesis-based therapy in primary Sjögren's syndrome (pSS). A thorough assessment of the literature over the last decade collected all papers reporting on the application of sophisticated statistical analysis in the study of systemic autoimmune disorders (SADs); to accomplish this, an automatic bibliography screening approach was devised. The review also tracked the evolution of trends in statistical techniques, cohort sizes, and the number of publications over this time span. In all, 44,077 abstracts and 1,017 publications were reviewed. The mean number of chosen articles each year was 101.0 (S.D. 19.16), and it climbed dramatically with time (from 74 articles in 2008 to 138 in 2017). Only 12 of them focused on pSS, and none on the topic of pathogenesis-based therapy. To summarize, whereas medicine is gradually entering the era of big data analysis and artificial intelligence, these techniques are not yet being utilized to characterize pSS-specific pathogenesis-based treatment. Nonetheless, large multicenter studies applying advanced algorithmic methods to large cohorts of SAD patients are investigating this aspect.

Reference: www.tandfonline.com/doi/full/10.1080/21645515.2018.1475872

See the original post:
Artificial Intelligence and Sophisticated Machine Learning Techniques are Being Used to Develop Pathogenesi... - Physician's Weekly