Archive for the ‘Artificial Intelligence’ Category

Biden's AI Initiative: Will It Work? – Forbes


The Biden administration has recently set into action its initiative on AI (artificial intelligence). This is part of legislation passed last year, which included a budget of $250 million over a period of five years. The goals are to provide easier access to the troves of government data and to provide advanced systems for creating AI models.

No doubt, this effort is a clear sign of the strategic importance of the technology. It is also a recognition that the U.S. does not want to fall behind other nations, especially China.

The AI task force has 12 distinguished members from government, private industry and academia. This diversity should help provide for a smarter approach.

But the focus on data will also be critical. "In areas of social importance such as housing, healthcare, education or other social determinants, the government is the only central organizer of data," said Dr. Trishan Panch, co-founder of Wellframe. "As such, if AI is going to deliver gains in these areas, the government has to be involved."

Yet there will certainly be challenges. Let's face it, the U.S. government often moves slowly and is burdened with various levels of local, state and federal authorities.

"To achieve the initiative's vision, government entities will need to go beyond sharing best practices and figure out how to share more data across departments," said Justin Borgman, CEO of Starburst. "For instance, expanding open data initiatives, which today are largely siloed by department, would greatly improve access to data. That would give artificial intelligence systems more fuel to do their jobs."

If anything, there will be a need for a different mindset from the government. And this could be a heavy lift. "Based on my experience in the public sector, the major challenge for the government is addressing the 'Missing Middle,'" said Jon Knisley, Principal of Automation and Process Excellence at FortressIQ. "There are a number of very advanced programs on one end, and then there are a lot of emerging programs on the other end. The greatest opportunity lies in closing that gap and driving more adoption. To be successful, there should be a focus as much as possible on applied AI."

But the government initiative can do something that has been difficult for the private sector to achieve: helping reskill the workforce for AI. This is perhaps one of the biggest challenges for the U.S.

"The question is: How do we create a large AI data science force that is integrated across every industry and department in the US?" said Judy Lenane, Chief Medical Officer at iRhythm. "To start, we'll need to begin AI curriculum early and encourage its growth in order to build a comprehensive workforce. This will be especially critical for industries that are currently behind in technological adoption, such as construction and infrastructure, but it also needs to be accessible."

In the meantime, the Biden AI effort will need to deal with the complex issues of privacy and ethics.

"Presently there is significant resistance on this subject given that most consumers feel that their privacy has been compromised," said Alice Jacobs, CEO of convrg.ai. "This is the result of a lack of transparency around managing consents and proper safeguards to ensure that data is secure. We will only be able to be successful if we can manage consents in a way where the consumer feels in control of their data. Transparent, unified consent management will be the path forward to alleviate resistance around data access and can provide the US a competitive advantage in this data and AI arms race."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems and Implementing AI Systems: Transform Your Business in 6 Steps. He also has developed various online courses, such as for the COBOL and Python programming languages.


Use of Artificial Intelligence in the Making of Hearing Aids – Analytics Insight

Applications of artificial intelligence are growing every day across different sectors, and healthcare offers numerous examples. AI has actually been part of hearing aids for years, and the following is how it happened. Hearing aids used to be relatively simple, but when manufacturers introduced a technology known as wide dynamic range compression (WDRC), the devices began to make a few decisions based on what is heard. For hearing aids to work effectively, they need to adapt to a person's individual hearing needs as well as all sorts of background noise environments. AI, machine learning, and neural networks are well suited to such a complicated, nonlinear, multi-variable problem.
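The basic decision WDRC makes can be sketched in a few lines: the gain applied depends on the input level, so soft sounds are amplified more than loud ones. A minimal sketch, with illustrative (not clinical) values for the threshold, compression ratio and maximum gain:

```python
# Minimal sketch of wide dynamic range compression (WDRC): gain depends
# on input level, so soft sounds get more amplification than loud ones.
# Threshold, ratio, and max gain here are illustrative, not clinical values.

def wdrc_gain_db(input_db, threshold_db=45.0, ratio=2.0, max_gain_db=30.0):
    """Return the gain (in dB) applied at a given input level (in dB SPL)."""
    if input_db <= threshold_db:
        return max_gain_db                       # linear region: full gain
    # Above the knee, output grows 1/ratio dB per input dB, so gain shrinks.
    return max_gain_db - (input_db - threshold_db) * (1.0 - 1.0 / ratio)

soft = wdrc_gain_db(40.0)   # quiet speech gets the full 30 dB of gain
loud = wdrc_gain_db(85.0)   # a loud sound gets 20 dB less gain
```

Even this simple level-dependent rule is a "decision based on what is heard," which is the step up from purely linear amplification that the article describes.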

Researchers have already accomplished a lot with AI when it comes to improving hearing. For instance, researchers at the Perception and Neurodynamics Laboratory (PNL) at the Ohio State University trained a deep neural network (DNN) to distinguish speech from other noise, such as humming and other background conversations. DeLiang Wang, professor of computer science and engineering at Ohio State University, explained in IEEE Spectrum: "People with hearing impairment could decipher only 29% of words muddled by babble without the program, but they understood 84% after the processing."
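Speech-from-noise separation of this kind is commonly framed as time-frequency masking: keep the time-frequency units where speech dominates and suppress the rest. A toy sketch of the masking step follows, using made-up magnitudes and an "oracle" mask computed from the known components; a trained DNN would instead have to estimate the mask from the mixture alone:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy time-frequency magnitudes: "speech" concentrated in two frequency
# bands, babble noise spread everywhere (illustrative values, not audio).
speech = np.zeros((4, 10))
speech[1:3, :] = 2.0                 # speech energy lives in bands 1 and 2
noise = rng.random((4, 10)) * 0.5
mixture = speech + noise

# Ideal-binary-mask idea: keep units where speech dominates the noise.
# Here the mask is computed from the known components purely for
# illustration; estimating it from the mixture is what the DNN learns.
mask = (speech > noise).astype(float)
enhanced = mixture * mask
```

The noise-only bands are zeroed out while the speech-dominated bands pass through, which is the mechanism behind the intelligibility gains Wang describes.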

In recent years, major hearing aid manufacturers have been adding AI technology to their premium hearing aid models. For example, Widex's Moment hearing aid utilizes AI and machine learning to create hearing programs based on a wearer's typical environments. Recently, Oticon introduced its newest hearing aid device, Oticon More, the first hearing aid with an onboard deep neural network. Oticon More's DNN was trained on 12 million-plus real-life sounds so that people wearing it can better understand speech and the sounds around them. In a crowded place, Oticon More's neural net receives a complicated layer of sounds, known as input. The DNN gets to work, first scanning and extracting simple sound elements and patterns from the input. It then builds on these elements to recognize and make sense of what's happening. Lastly, the hearing aid makes a decision on how to balance the sound scene, making sure the output is clean and ideally balanced to the person's unique type of hearing loss. Speech and other sounds in the environment are complicated acoustic waveforms, but with unique patterns and structures that are exactly the sort of data deep learning is designed to analyze.
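The scan-extract-balance pipeline described above can be loosely sketched as a small network that maps a frame of band energies to per-band gains. Everything here is illustrative: the random weights stand in for a trained model, and the eight-band layout is invented, not Oticon's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectral frame: energy in 8 frequency bands (hypothetical values).
frame = rng.random(8)

# Hypothetical weights for a tiny two-layer network; a real device would
# load weights trained on millions of sound scenes.
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)
W2, b2 = rng.standard_normal((8, 16)), np.zeros(8)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rebalance(frame):
    """Map a frame of band energies to per-band gains in (0, 1)."""
    h = relu(W1 @ frame + b1)       # extract simple sound elements/patterns
    gains = sigmoid(W2 @ h + b2)    # decide how to balance the sound scene
    return gains * frame            # apply gains to produce the output frame

out = rebalance(frame)
```

The key design point is that the output is a set of gains rather than raw audio: the network reshapes the sound scene band by band, which is how it can be tuned to a person's unique hearing loss.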

Hearing aids range widely in price, and some at the lower end have fewer AI-driven bells and whistles. Some patients may not need all the features: people who live alone or rarely find themselves in crowded scenarios, for instance, might not benefit from the functionality found in higher-end models.

But for anyone who is out and about a lot, especially in situations with big soundscapes, AI-powered features allow for an improved hearing experience. One way the improvement can be measured is memory recall. It's not that hearing aids like Oticon More literally improve a person's memory; rather, artificial intelligence helps people spend less effort trying to make sense of the noise around them, a cost known as listening effort. When listening effort is reduced, a person can focus more on the conversation and all the nuances conveyed within it. So the use of AI in hearing aids helps the brain work in a more natural way.



Facebook’s new Artificial Intelligence technology not only identifies Deepfakes, it can also give hints about their origin – Digital Information…

Videos and pictures created by artificial intelligence (AI) have become very popular, and that can create serious problems: fake videos and manipulated images can be used to put anyone in trouble. Deepfakes use deep learning models to create fictitious photos, videos, and events. These days, deepfakes look so realistic that it is very difficult for the human eye to tell a real picture from a fake one. Facebook's AI team, in collaboration with a group at Michigan State University, has therefore created a model that can not only identify a fabricated picture or video but can even trace its origin.

Facebook's technique checks for resemblances across a collection of deepfakes to find out whether they have a common source, looking for distinctive marks such as small specks of noise or minor quirks in a photo's color range. By spotting these small fingerprints in the photo, the new AI model can infer details of how the neural network that produced the photo was designed, such as how large the model is and how it was trained.

The researchers tested the technology on a dataset of about 100,000 fake pictures created by 100 different generators, each producing a thousand images. A few of the pictures were used to train the system, while the rest were held out and then shown to it as images from unidentified sources. The researchers declined to say how precise the AI's attributions were during the test, but they say they are working to make the technology better so it can help the platform's moderators detect such bogus content.
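The fingerprint idea can be sketched as follows: extract a high-frequency residual from an image and correlate it against the residuals of known generators. This is a simplified stand-in for Facebook's actual model, with invented noise patterns, a naive box blur, and tiny 16x16 "images":

```python
import numpy as np

rng = np.random.default_rng(1)

def residual(img):
    """High-frequency residual: the image minus a blurred copy.
    Generator-specific noise 'fingerprints' tend to live here."""
    blurred = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img - blurred

def attribute(img, known_fingerprints):
    """Return the index of the known generator whose fingerprint
    correlates best with this image's residual."""
    r = residual(img).ravel()
    r = (r - r.mean()) / (r.std() + 1e-9)
    scores = []
    for f in known_fingerprints:
        f = f.ravel()
        f = (f - f.mean()) / (f.std() + 1e-9)
        scores.append(float(np.mean(r * f)))
    return int(np.argmax(scores))

# Two hypothetical generators, each leaving a distinct fixed noise pattern.
pattern_a = rng.standard_normal((16, 16)) * 0.1
pattern_b = rng.standard_normal((16, 16)) * 0.1
fingerprints = [residual(pattern_a), residual(pattern_b)]

# A "fake" image: smooth content plus generator B's noise pattern.
content = np.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
fake = content + pattern_b

which = attribute(fake, fingerprints)   # attributes the fake to generator B
```

The real system goes further, using the fingerprint to estimate properties of the generating network itself, but correlation against known residual patterns is the intuition behind tracing an image's origin.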

The researchers acknowledge uncertainty about how effective the technology will be beyond the lab, confronting fake pictures in the wild on the internet. The fake images it identified came from a curated database assembled in the lab, and there is still a chance that creators will make realistic-looking videos and pictures that bypass the system. The team had no prior research data to compare their results against, but they say the system works much better than earlier approaches.


Allianz Global Artificial Intelligence, led by Sebastian Thomas, accumulates 30% annualized over 3 years. Analysis by Daniel Pérez, Explica.co – Explica

To invest, it is key to position yourself on the side of growth, innovation and development, and currently the biggest disruptor we have in the world is Artificial Intelligence.

Today I want to talk about Allianz Global Artificial Intelligence, led by Sebastian Thomas, which invests in this interesting theme. It has accumulated 30% annualized over 3 years, versus 18% for the index, investing in all types of companies that benefit from AI.

To talk about the fund, it is necessary to understand the theme, its impact on economies and the sectors most affected. Here is a projection of the impact by sector:

They distinguish three broad levels at which to view AI:

AI infrastructure

AI applications

Traditional sectors

Here is a great summary photo of the broad investment spectrum and the different sub-topics

The investment process is divided into three key steps:

Generation of ideas

Fundamental Analysis

Portfolio construction

The distribution by block and the analysis of the impact of AI on each company are key. From there, they add the companies with the greatest potential and manage their exposure.

In summary, we have a fund with a top management team and powerful analysis capabilities that invests in a theme with high growth projections and a large impact on the economy.

A great option to benefit from the changes that AI is causing in the world.


Can artificial intelligence predict how sick you’ll get from COVID-19? UC San Diego scientists think so – The San Diego Union-Tribune

A team of San Diego scientists is harnessing artificial intelligence to understand why COVID-19 symptoms can vary dramatically from one person to the next, information that could prove useful in the continued fight against the coronavirus and future pandemics.

Researchers pored through publicly available data to see how other viruses alter which genes our cells turn on or off. Using that information, they found a set of genes activated across a wide range of infections, including the novel coronavirus. Those genes predicted whether someone would have a mild or a severe case of COVID-19, and whether they were likely to have a lengthy hospital stay.

A UC San Diego-led team joined by researchers at Scripps Research and the La Jolla Institute for Immunology published the findings June 11. The study's authors say their approach could help determine whether new treatments and vaccines are working.

"When the whole world faced this pandemic, it took several months for people to scramble to understand the new virus," said Dr. Pradipta Ghosh, a UCSD cell biologist and one of the study's authors. "I think we need more of this computational framework to guide us in panic states like this."

The project began in March 2020, when Ghosh teamed up with UCSD computer scientist Debashis Sahoo to better understand why the novel coronavirus was causing little to no symptoms in some people while wreaking havoc on others.

There was just one problem: The novel coronavirus was, well, novel, meaning there wasn't much data to learn from.

So Sahoo and Ghosh took a different tack. They went to public databases and downloaded 45,000 samples from a wide array of viral infections, including Ebola, Zika, influenza, HIV, and hepatitis C virus, among others.

Their hope was to find a shared response pattern to these viruses, and that's exactly what they saw: 166 genes that were consistently cranked up during infection. Among that list, 20 genes generally separated patients with mild symptoms from those who became severely ill.
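As a toy illustration of how a small gene signature can separate mild from severe cases, one can score each patient by the mean expression of the signature genes and threshold the score. The data below is entirely synthetic; the study's real gene lists, expression values, and modeling are more involved:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic expression matrix: 100 patients x 20 signature genes.
# Severe cases (label 1) express the signature more strongly; this shift
# is an assumption made purely for illustration.
n, genes = 100, 20
labels = rng.integers(0, 2, n)
expr = rng.normal(0.0, 1.0, (n, genes)) + labels[:, None] * 1.5

# Composite severity score: mean expression across the signature genes.
# Averaging 20 genes shrinks the noise, making the two groups separable.
score = expr.mean(axis=1)

# Simple threshold classifier at the midpoint of the two class means.
threshold = (score[labels == 0].mean() + score[labels == 1].mean()) / 2
predicted = (score > threshold).astype(int)
accuracy = float((predicted == labels).mean())
```

The point of the sketch is that no single gene needs to be decisive: a consistent shift across a 20-gene panel is enough for a very simple score to separate the groups.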

The coronavirus was no exception. Sahoo and Ghosh say they identified this common viral response pattern well before testing it in samples from COVID-19 patients and infected cells, yet the results held up surprisingly well.

"It seemed to work in every data set we used," Sahoo said. "It was hard to believe."

They say their findings show that respiratory failure in COVID-19 patients is the result of overwhelming inflammation that damages the airways and, over time, makes immune cells less effective.

Stanford's Purvesh Khatri isn't surprised. His lab routinely uses computer algorithms and statistics to find patterns in large sets of immune response data. In 2015, Khatri's group found that respiratory viruses trigger a common response. And in April, they reported that this shared response applied to a range of other viruses, too, including the novel coronavirus.

That makes sense, Khatri says, because researchers have long known there are certain genes the immune system turns on in response to virtually any viral infection.

"Overall, the idea is pretty solid," said Khatri of the recent UCSD-led study. "The genes are all (the) usual suspects."

Sahoo and Ghosh continue to test their findings in new coronavirus data as it becomes available. They're particularly interested in COVID-19 long-haulers. Ghosh says they're already seeing that people with prolonged coronavirus symptoms have distinct gene activation patterns compared to those who've fully recovered. Think of it like a smoldering fire that won't die out.

The researchers' ultimate hope isn't just to predict and understand severe disease, but to stop it. For example, they say, a doctor could give a patient a different therapy if a blood sample suggests they're likely to get sicker with their current treatment. Ghosh adds that the gene pattern they're seeing could help identify promising new treatments and vaccines against future pandemics, based on which therapies prevent responses linked to severe disease.

"In unknown, uncharted territory, this provides guard rails for us to start looking around, understand (the virus), find solutions, build better models and, finally, find therapeutics."
