Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence Could Help Scientists Predict Where And When Toxic Algae Will Bloom – WBUR

Climate-driven change in the Gulf of Maine is raising new threats that "red tides" will become more frequent and prolonged. But at the same time, powerful new data collection techniques and artificial intelligence are providing more precise ways to predict where and when toxic algae will bloom. One of those new machine learning prediction models has been developed by a former intern at Bigelow Labs in East Boothbay.

In a busy shed on a Portland wharf, workers for Bangs Island Mussels sort and clean shellfish hauled from Casco Bay that morning. Wholesaler George Parr has come to pay a visit.

"I wholesale to restaurants around town, and if there's a lot of mackerel or scallops, I'll ship into Massachusetts," he says.

But business grinds to a halt, he says, when blooms of toxic algae suddenly emerge in the bay, causing the dreaded red tide.

Toxins can build in filter feeders to levels that would cause "Paralytic Shellfish Poisoning" in human consumers. State regulators shut down shellfish harvests long before danger grows acute. But when a red tide swept into Casco Bay last summer, Bangs Island's harvest was shut down for a full 11 weeks.

"So when the restaurants can't get Bangs Island, they're like, 'Why can't we get Bangs Island?' It was really bad this summer. And nobody was happy."

As Parr notes, businesses of any kind hate unpredictability. And forecasting the onset or departure of a red tide has long been a challenge, although that's changing with the help of a type of artificial intelligence called machine learning.

"We're coming up with forecasts on a weekly basis for each site. For me that's really exciting. That's what machine learning is bringing to the table," says Izzi Grasso, a recent Southern Maine Community College graduate who is now pursuing a mathematics degree at Clarkson University.

Last summer Grasso interned at the Bigelow Laboratory for Ocean Sciences in East Boothbay. That's where she helped to lead a successful project to use cutting-edge "neural network" technology that is modeled on the human brain to better predict toxic algal blooms in the Gulf of Maine.

"Really high accuracy. Right around 95 percent or higher, depending on the way you split it up," she says.

Here's how the project worked: the researchers accessed a massive amount of data on toxic algal blooms from the state Department of Marine Resources. The data sets detailed the emergence and retreat of varied toxins in shellfish samples from up and down the coast over a three-year period.

The researchers trained the neural network to learn from those thousands of data points. From them, it built its own statistical model of the complex phenomena that can lead up to a red tide.

"Then we tested how it would actually predict on unknown data," says Grasso.

Grasso says they fed in data from early 2017, which the network had never seen, and asked it to forecast when and where the toxins would emerge.
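The train-then-test workflow Grasso describes can be sketched in a few lines. Everything below is an illustrative stand-in, not Bigelow's actual network or data: a toy threshold "model" is fit on historical weekly shellfish samples, then scored on held-out observations the model never saw.

```python
# Hypothetical sketch of evaluating a bloom predictor on held-out data.
# The records, toxin values, and threshold "model" are invented for
# illustration; the real project used a neural network on three years
# of Maine Department of Marine Resources data.

def train_threshold(records):
    """Pick the toxin level that best separates bloom weeks
    from clear weeks in the training records."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted({r["toxin"] for r in records}):
        acc = sum((r["toxin"] >= t) == r["bloom"] for r in records) / len(records)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(records, threshold):
    """Fraction of weeks where the prediction matches what happened."""
    return sum((r["toxin"] >= threshold) == r["bloom"] for r in records) / len(records)

# Toy weekly samples: a toxin measurement and whether a bloom occurred.
train = [
    {"toxin": 10, "bloom": False}, {"toxin": 90, "bloom": True},
    {"toxin": 20, "bloom": False}, {"toxin": 70, "bloom": True},
]
held_out_2017 = [
    {"toxin": 15, "bloom": False}, {"toxin": 85, "bloom": True},
]

t = train_threshold(train)
print(accuracy(held_out_2017, t))  # → 1.0 on this toy hold-out
```

The key point mirrors the article: accuracy is only meaningful when measured on data the model was never trained on, which is why the team held back the early-2017 observations.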

"I wasn't surprised that it worked, but I was surprised how well it worked, the level of accuracy and the resolution on specific sites and specific weeks," says Nick Record, Bigelow's big data specialist.

Record says that the network's accuracy, particularly in the week before a bloom emerges, could be a game-changer for the shellfish industry and its regulators.

Once it's ready, that is.

"Basically it works so well that I need to break it as many ways as I can before I really trust it."

Still, the work has already been published in a peer-reviewed journal, and it is getting attention from the scientific community. Don Anderson is a senior scientist at the Woods Hole Oceanographic Institution who is working to expand the scope of data-gathering efforts in the Gulf.

"The world is changing with respect to the threat of algal blooms in the Gulf of Maine," he says. "We used to worry about only one toxic species and human poisoning syndrome. Now we have at least three."

Anderson notes, though, that machine-learning networks are only as good as the data fed into them. The Bigelow network, for instance, might not be able to account for singular oceanographic events that are short and sudden, or that haven't been captured in previous data sets, such as a surge of toxic cells that his instruments detected off Cutler last summer.

"With an instrument moored in the water there, and we in fact got that information, called up the state of Maine and said you've got to be careful, there's a lot of cells moving down there, and they actually had a meeting, they implemented a provisional closure just on the basis of that information, which was ultimately confirmed with toxicity once they measured it," says Anderson.

Anderson says that novel modeling techniques such as Bigelow's, coupled with an expanded number of high-tech monitoring stations like those Woods Hole is pioneering in the Gulf, could make forecasting toxic blooms as simple as checking the weather report.

"That situational awareness is what everyone's striving to produce in the field of monitoring and management of these toxic algal blooms, and it's going to take a variety of tools, and this type of artificial intelligence is a valuable part of that arsenal."

Back at the Portland wharf, shellfish dealer George Parr says the research sounds pretty promising.

"Forewarned is forearmed," Parr says. "If they can figure out how to neutralize the red tide, that'd be even better."

Bigelow scientists and former intern Izzi Grasso are now working to look "under the hood" of the neural network, to figure out how, exactly, it arrives at its conclusions. They say that could provide clues not only about how to predict toxic algal blooms, but even how to prevent them.

This story is a production of the New England News Collaborative. A version of this story was originally published by Maine Public Radio.

Read more:
Artificial Intelligence Could Help Scientists Predict Where And When Toxic Algae Will Bloom - WBUR

Implementing Artificial Intelligence In Your Business (infographic) – Digital Information World

Learning by doing is a great thing unless it's costing you money; then it may not seem worth it. In the business world, when new technologies come around, it may be tempting to take a wait-and-see approach, watching your competitors' successes and failures before taking the time to implement new technologies yourself. Unfortunately, when it comes to artificial intelligence (AI), the potential payoff is too big to ignore, and waiting to see how your competitors fare could leave your business in the dust.

Taking the right steps toward implementing AI is crucial. Some companies know they need to hire a data scientist, but they don't know what they expect that person to do, and they try to hire someone with no framework or plan in place.

The first step toward integrating artificial intelligence into your business strategy is to take it seriously and make a plan for how it will work. Start with an end goal in mind and work your way back from there. Next, figure out exactly what AI application will help you achieve that goal. With that in mind, start a pilot program with a targeted goal, keeping in mind that it could take a year or more to see any results from such a program.

While going through the growing pains of implementing an enterprise-wide AI system, it may seem as though the technology is a huge waste of time and money. But learning by doing is a valuable way to understand the ins and outs of any new technology, and even if your company's experiment fails, you can still gain valuable insights from those failures. These insights can help you find a better focus for your next artificial intelligence experiment.

Currently, fewer than a third of AI pilots progress past the exploratory stage to full implementation. This does not mean failure, though; it simply means the hypothesis needs to be adjusted before the experiment is restarted from the beginning. As with any new technology, there are going to be growing pains and opportunities to learn more.

This can translate to many different outcomes. AI algorithms can be used to reduce waste and improve quality by removing unnecessary variables. They can help meet rising demand and cut variable costs through analysis, make adjustments, and predict when the market will shift. The infographic below includes examples from real businesses.

Today, more than 60% of business leaders urgently need to find and implement a strategy for using artificial intelligence in their businesses, but fewer than half of those actually have a plan. Learn more about implementing AI in your business below.

See original here:
Implementing Artificial Intelligence In Your Business (infographic) - Digital Information World

Using artificial intelligence to speed up cancer detection – University of Leeds

The Secretary of State for Digital, Culture, Media and Sport visited the University today to hear how researchers are being trained to deploy artificial intelligence (AI) in the fight against cancer.

Baroness Nicky Morgan met PhD researchers involved in creating the next generation of intelligent technology that will revolutionise healthcare.

The University is one of 16 centres for doctoral training in AI funded by UK Research and Innovation, the Government agency responsible for fostering research and development.

The focus of the doctoral training at Leeds is to develop researchers who can apply AI to medical diagnosis and care.

Scientists believe intelligent systems and data analytics will result in quicker and more accurate diagnosis. Early detection is at the heart of the NHS plan to transform cancer survival rates by 2028.

Baroness Morgan said: "We are committed to being a world leader in artificial intelligence technology, and through our investment in 16 new Centres for Doctoral Training we are helping train the next generation of researchers.

"It was inspirational to meet some of the leading experts from medicine and computer science working in the new centre at Leeds University today. They are doing fantastic work to diagnose cancer quicker, which could save millions of lives."

Baroness Morgan spent time talking to the PhD researchers.

Professor Lisa Roberts, Deputy Vice-Chancellor: Research and Innovation with Baroness Nicky Morgan

Anna Linton is a neuroscientist accepted onto the first cohort of the programme, which started in the autumn.

She said: "The healthcare system can generate a vast quantity of information, but sometimes it is assessed in isolation.

"I am interested in researching AI systems that can analyse medical notes and the results of pathology tests and scans, identify patterns in that disparate information and make order of it, to give a unified picture of a patient's health status.

"That information will help the GP or other healthcare professional make a more precise diagnosis."

Dr Emily Clarke is a hospital doctor specialising in histopathology, the changes in tissue caused by disease. She is an associate member of the doctoral training programme on a research scholarship from the Medical Research Council.

She wants to develop an AI system to improve the diagnosis of melanoma, a type of skin cancer whose incidence, according to Cancer Research UK, has more than doubled since the early 1990s. It has the fastest-rising incidence of any cancer.

Melanoma is detected from the visual examination by a histopathologist of tissue samples taken during a biopsy. But up to one in six cases is initially misdiagnosed.

Dr Clarke said: "I am hoping we can develop an automated system that can help histopathologists identify melanoma. Diagnosing melanoma can be notoriously difficult, so it is hoped that in the future AI may help build a knowledge base of the types of cell changes that are suggestive of melanoma and provide a more accurate prediction of a patient's prognosis."

Dr Emily Clarke discussing her research project

About 10 researchers will be recruited onto the training programme each year. When it is fully up and running, there will be 50 people studying for a PhD.

Professor Lisa Roberts, Deputy Vice-Chancellor: Research and Innovation, said: "The research at Leeds will ensure the UK remains at the forefront of an important emerging technology that will shape healthcare for future generations.

"There is little doubt that our researchers will contribute to future academic and industrial breakthroughs in the field of AI, enabling industry in the UK to remain at the heart of innovation in AI."

David Hogg, Professor of Artificial Intelligence and Director of the Leeds Centre for Doctoral Training, said: "The UK is a world leader in AI.

"But we can't be complacent. We need to ensure there are enough talented and creative people with the skills and knowledge to harness and develop this powerful technology."

The PhD researchers will be supervised by leading experts in computer science and medicine from the University and Leeds Teaching Hospitals NHS Trust. "To harness the technology requires researchers with a strong understanding of medicine, biology and computing, and we aim to give that to them," Professor Hogg said.

The researchers joining the Leeds training programme come from a range of backgrounds: some are computer scientists and others are biologists or healthcare professionals, but all are able to think computationally and to express problems and solutions in a form that can be executed by a computer.

The programme is hosted by the Leeds Institute for Data Analytics (LIDA), established with University investment to support innovation in medical bioinformatics, funded by the Medical Research Council, and Consumer Data, funded by the Economic and Social Research Council.

LIDA has now grown to support a portfolio in excess of £45 million of research across the University, bringing together over 150 researchers and data scientists. It supports the University's partnership with the Alan Turing Institute, the UK's national institute for data science and artificial intelligence.

The University has a strong track record in applying digital technologies to healthcare. In partnership with Leeds Teaching Hospitals NHS Trust, it is bringing together nine hospitals, seven universities and medical technology companies to create a digital pathology network which will allow medical staff to collaborate remotely and to conduct AI research. This is known as the Northern Pathology Imaging Co-operative.

Leeds Teaching Hospitals NHS Trust is a leader in using digital pathology for cancer diagnosis.

Main photo shows some of the PhD researchers with - front, from left - Professor David Hogg, Director of the Leeds Centre for Doctoral Training, Baroness Nicky Morgan, Secretary of State for Digital, Culture, Media and Sport, and Professor Lisa Roberts, Deputy Vice-Chancellor: Research and Innovation.

More here:
Using artificial intelligence to speed up cancer detection - University of Leeds

Artificial intelligence to study the behavior of Neanderthals – HeritageDaily

Abel Moclán, an archaeologist at the Centro Nacional de Investigación sobre la Evolución Humana (CENIEH), has led a study combining archaeology and artificial intelligence, published in the journal Archaeological and Anthropological Sciences, of the Navalmaíllo Rock Shelter site in the locality of Pinilla del Valle, Madrid. The study shows that Neanderthal groups there broke the bones of medium-sized animals, such as deer, to consume the marrow within.

The particular feature of the study lies in its tremendous statistical potential. For the first time, Artificial Intelligence has been used to determine the agent responsible for breaking the bones at an archaeological site, with highly reliable results, which it will be possible to compare with other sites and experiments in the future.

Credit: CENIEH

"We have managed to show that statistical tools based on Artificial Intelligence can be applied to studying the breaking of the fossil remains of animals which appear at sites," states Moclán.

The study emphasizes not just this activity carried out by the Neanderthals, but also the methodology developed by its authors. On this point, Moclán insists on the importance of Artificial Intelligence, as this is undoubtedly the perfect line of work for the immediate future of Archaeology in general and Taphonomy in particular.

The largest Neanderthal settlement

The Navalmaíllo Rock Shelter, about 76,000 years old, offers one of the few large windows into Neanderthal behavior within the Iberian Meseta. With an area of over 300 m², it may well be the largest Neanderthal camp known in the center of the Iberian Peninsula, and it has been possible to reveal different activities conducted by these hominins here, such as hunting large animals, manufacturing stone tools and the systematic use of fire.

In this study, part of the Valle de los Neandertales project, which includes other locations in the archaeological site complex of Calvero de la Higuera, the collaborating researchers were Rosa Huguet of the IPHES in Tarragona, Belén Márquez and César Laplana of the Museo Arqueológico Regional in Madrid, as well as the three codirectors of the Pinilla del Valle project: Juan Luis Arsuaga, Enrique Baquedano and Alfredo Pérez González.

CENIEH

Header Image: Abrigo de Navalmaíllo. Credit: CENIEH

See the rest here:
Artificial intelligence to study the behavior of Neanderthals - HeritageDaily

Artificial intelligence to update digital maps and improve GPS navigation – Inceptive Mind

While Google and other technology giants have their own dynamics to keep the most detailed and up-to-date maps possible, it is an expensive and time-consuming process. And in some areas, the data is limited.

To improve this, researchers at MIT and the Qatar Computing Research Institute (QCRI) have developed a new machine-learning model based on satellite images that could significantly improve digital maps for GPS navigation. The system, called RoadTagger, recognizes road types and the number of lanes in satellite images, even when trees or buildings obscure the view. In the future, the system should recognize even more details, such as bike paths and parking spaces.

RoadTagger relies on a novel combination of a convolutional neural network (CNN) and a graph neural network (GNN) to automatically predict the number of lanes and road types (residential or highway) behind obstructions.

Simply put, the model is fed only raw satellite data and produces its output without human intervention. From the features it learns in those images, it can predict, for example, a road's type or whether several lanes continue behind a grove of trees.
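The CNN-plus-GNN idea can be illustrated with a toy message-passing step. Everything below — the segment names, the two-element "CNN" feature vectors, and the mean-pooling update — is a simplified stand-in for RoadTagger's actual architecture: per-segment image features are smoothed along the road graph, so an occluded segment can borrow evidence from its connected neighbors.

```python
# Illustrative sketch (not RoadTagger's code): features for each road
# segment, as a CNN might extract them from a satellite tile, are
# averaged with neighboring segments' features over the road graph
# before classification.

def gnn_smooth(features, edges, rounds=2):
    """Simple message passing: each segment's feature vector becomes
    the mean of itself and its graph neighbors, repeated `rounds` times."""
    neighbors = {n: [] for n in features}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    for _ in range(rounds):
        features = {
            n: [(f + sum(features[m][i] for m in neighbors[n]))
                / (1 + len(neighbors[n]))
                for i, f in enumerate(feats)]
            for n, feats in features.items()
        }
    return features

# Toy per-segment "CNN" features: [residential evidence, highway evidence].
features = {
    "s1": [0.1, 0.9],
    "s2": [0.5, 0.5],   # occluded by trees: its own tile is uninformative
    "s3": [0.2, 0.8],
}
edges = [("s1", "s2"), ("s2", "s3")]  # s1—s2—s3 form one road

smoothed = gnn_smooth(features, edges)
label = "highway" if smoothed["s2"][1] > smoothed["s2"][0] else "residential"
print(label)  # → highway: neighbor evidence resolves the occluded segment
```

The design point is that road attributes are spatially coherent: a segment's lane count or type rarely changes mid-road, so propagating information along the graph lets the model see past local occlusions.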

The research team has already tested RoadTagger on real data covering 688 square kilometers of maps of 20 U.S. cities, achieving 93% accuracy in detecting road types and 77% in counting lanes.

Maintaining this degree of accuracy on digital maps would not only save time and avoid many headaches for drivers but could also prevent accidents. And of course, it would be vital information in case of emergency or disasters.

The researchers now want to further improve the system and record additional properties, including bike paths, parking bays, and the road surface. After all, it makes a difference for drivers whether a former gravel track somewhere in the hinterland is now paved.

Read more from the original source:
Artificial intelligence to update digital maps and improve GPS navigation - Inceptive Mind