Archive for the ‘Artificial Intelligence’ Category

This Harvard Professor And His Students Have Raised $14 Million To Make AI Too Smart To Be Fooled By Hackers – Forbes

By adding a few pixels (highlighted in red) to a legitimate check, fraudsters can trick artificial intelligence models into mistaking a $401 check for one worth $701. Undetected, the exploit could lead to large-scale financial fraud.

Yaron Singer climbed the tenure-track ladder to a full professorship at Harvard in seven years, fueled by his work on adversarial machine learning, a way to fool artificial intelligence models using misleading data. Now, Singer's startup, Robust Intelligence, which he formed with a former Ph.D. advisee and two former students, is emerging from stealth to take his research to market.

This year, artificial intelligence is set to account for $50 billion in corporate spending, though companies are still figuring out how to integrate the technology into their business processes. They are also still figuring out how to protect their good AI from bad AI, like an algorithmically generated voice deepfake that can spoof voice-authentication systems.

"In the early days of the internet, it was designed like everybody's a good actor. Then people started to build firewalls because they discovered that not everybody was," says Bill Coughran, former senior vice president of engineering at Google. "We're seeing signs of the same thing happening with these machine learning systems. Where there's money, bad actors tend to come in."

Enter Robust Intelligence, a new startup led by CEO Singer with a platform that the company says is trained to detect more than 100 types of adversarial attacks. Though its founders and most of the team hold a Cambridge pedigree, the startup has established headquarters in San Francisco and announced Wednesday that it had raised $14 million in a seed and Series A round led by Sequoia. Coughran, now a partner at the venture firm, is the lead investor on the fundraise, which also comes with participation from Engineering Capital and Harpoon Ventures.

Robust Intelligence CEO Yaron Singer is taking a leave from Harvard, where he is a professor of computer science and applied mathematics.

Singer followed his Ph.D. in computer science at the University of California, Berkeley by joining Google as a postdoctoral researcher in 2011. He spent two years working on algorithms and machine-learning models to make the tech giant's products run faster, and saw how easily AI could go off the rails with bad data.

"Once you start seeing these vulnerabilities, it gets really, really scary, especially if we think about how much we want to use artificial intelligence to automate our decisions," he says.

Fraudsters and other bad actors can exploit the relative inflexibility of artificial intelligence models in processing unfamiliar data. For example, Singer says, a check for $401 can be manipulated by adding a few pixels that are imperceptible to the human eye yet cause the AI model to read the check erroneously as $701. "If fraudsters get their hands on checks, they can hack into these apps and start doing this at scale," Singer says. Similar modifications to data inputs can lead to fraudulent financial transactions, as well as spoofed voice or facial recognition.
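The check exploit Singer describes is an instance of what researchers call an adversarial example: a perturbation, tiny per pixel, chosen to push a model's score across a decision boundary. The following is a minimal, purely illustrative sketch, not Robust Intelligence's method: a random linear two-class "digit reader" stands in for a real check-processing model, and all weights and the image are synthetic placeholders.

```python
import numpy as np

# Toy illustration of an adversarial perturbation: a random linear
# two-class classifier stands in for a real check-reading model.
# All weights and the "image" below are synthetic placeholders.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64))        # hypothetical weights for classes 0 and 1
x = rng.normal(size=64)             # flattened 8x8 "digit" image

def predict(img):
    return int(np.argmax(W @ img))

label = predict(x)                  # what the model currently reads
target = 1 - label                  # the misreading the attacker wants

# Gradient of (target score - current score) with respect to the pixels.
grad = W[target] - W[label]
margin = float((W[label] - W[target]) @ x)   # distance from the boundary

# Smallest uniform per-pixel nudge, in the sign of the gradient, that
# pushes the score past the decision boundary (an FGSM-style step).
eps = 1.01 * margin / np.abs(grad).sum()
x_adv = x + eps * np.sign(grad)

assert predict(x_adv) == target     # the tiny perturbation flips the reading
print(f"per-pixel change: {eps:.4f}")
```

Against deep networks, attacks in the fast-gradient-sign family do the same thing using backpropagated gradients in place of this closed-form linear case; because the change is spread evenly across many pixels, it can stay imperceptible to the eye while still flipping the prediction.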

In 2013, upon taking an assistant professorship at Harvard, Singer decided to focus his research on devising mechanisms to secure AI models. Robust Intelligence comes from nearly a decade in the lab for Singer, during which time he worked with three Harvard pupils who would become his cofounders: Eric Balkanski, a Ph.D. student advised by Singer; Alexander Rilee, a graduate student; and undergraduate Kojin Oshiba, who coauthored academic papers with the professor. Across 25 papers, Singer's team broke ground on designing algorithms to detect misleading or fraudulent data, and helped bring the issue to government attention, even receiving an early DARPA grant to conduct its research. Rilee and Oshiba remain involved in the day-to-day activities at Robust, the former on government and go-to-market, the latter on security, technology and product development.

Robust Intelligence is launching with two products: an AI firewall and a red-team offering, in which Robust functions like an adversarial attacker. The firewall works by wrapping around an organization's existing AI model to scan for contaminated data via Robust's algorithms. The other product, called Rime (for Robust Intelligence Machine Engine), performs a stress test on a customer's AI model by inputting basic mistakes and deliberately launching adversarial attacks on the model to see how it holds up.

The startup is currently working with about ten customers, says Singer, including a major financial institution and a leading payment processor, though Robust will not name any names due to confidentiality. Launching out of stealth, Singer hopes to gain more customers as well as double the size of the team, which currently stands at 15 employees. Singer, who is on leave from Harvard, is sheepish about his future in academia, but says he is focused on his CEO role in San Francisco at the moment.

"For me, I've climbed the mountain of tenure at Harvard, but now I think we've found an even higher mountain, and that mountain is securing artificial intelligence," he says.

Continued here:
This Harvard Professor And His Students Have Raised $14 Million To Make AI Too Smart To Be Fooled By Hackers - Forbes

Defense Official Calls Artificial Intelligence the New Oil – Department of Defense

"Artificial intelligence is the new oil, and the governments or the countries that get the best datasets will unquestionably develop the best AI," the Joint Artificial Intelligence Center's chief technology officer said Oct. 15.

Speaking on a panel about AI superpowers at the Politico AI Summit, Nand Mulchandani said AI is a very large technology and industry. "It's not a single, monolithic technology," he said. "It's a collection of algorithms, technologies, etc., all cobbled together to call AI."

The United States has access to global datasets, and that's why global partnerships are so incredibly important, he said, noting that the Defense Department recently launched the AI Partnership for Defense at the JAIC to gain access to global datasets with partners, which gives DOD a natural advantage in building these systems at scale.

"Industry has to develop on its own, and that's where the global talent is; that's where the money is; that's where all of the innovation is going on," Mulchandani noted, adding that the U.S. government's job is to be able to work in the best way and absorb the best technology that it can. That includes working hand in glove with industry on a voluntary basis, he said. He said there are certain areas of AI that are highly scaled that you can trust and deploy at scale.

"But notice that not many of those systems have been deployed on weapon systems. We actually don't have any of them deployed," he said.

Mulchandani said the reason is that explainability, testing, trust and ethics are all highly connected pieces, as is AI security when it comes to model security and data security, such as being able to penetrate and break models. This is all very early, he said, which is why the DOD and the U.S. government more broadly have taken a very stringent approach to putting together the ethics principles and frameworks within which they will operate.

"Earlier this year, one of the first international visits that we made was to NATO and our European partners, and [we] then pulled them into this AI partnership for defense that I just talked about," he said. "Thirteen different countries are getting together to actually build these principles because we actually do need to build a lot of confidence in this."

He said DOD continues to attract and have the best talent at JAIC. "The real tricky part is: How do we actually take that technology and get it deployed? That's the complexity of integrating AI into existing systems, because one isn't going to throw away the entire investment of legacy systems that one has, whether it be software or hardware or even military hardware," Mulchandani said. "[How] can we absorb the best of what's coming and get it integrated into the system as where the complexity is?"

DOD has had a long history with companies that know how to do that, and harnessing it is the actual work and the piece that the department is worried about the most and really focused on the most, he added.

The technology companies DOD works with are global companies with a global workforce, he emphasized. "These are not linked to a particular geographic region. We hire. We bring the best talent in, wherever it may be, [and we have] research and development arms all over the world."

DOD has special security needs and requirements that must be taken care of when it comes to data, and the JAIC is putting in place very different development processes now to handle AI development, he said. "So, the dynamics of the way software gets built [and] the dynamics of who builds it are changing in a very significant way," Mulchandani said. "But the global war for talent is a real one, which is why we are not actually focused on trying to corner the market on talent."

He said they are trying to build leverage by building relationships with the leading AI companies to harness the innovation.

See original here:
Defense Official Calls Artificial Intelligence the New Oil - Department of Defense

FDA highlights the need to address bias in AI – Healthcare IT News

The U.S. Food and Drug Administration on Thursday convened a public meeting of its Patient Engagement Advisory Committee to discuss issues regarding artificial intelligence and machine learning in medical devices.

"Devices using AI and ML technology will transform healthcare delivery by increasing efficiency in key processes in the treatment of patients," said Dr. Paul Conway, PEAC chair and chair of policy and global affairs of the American Association of Kidney Patients.

As Conway and others noted during the panel, AI and ML systems may have algorithmic biases and lack transparency, potentially leading, in turn, to an undermining of patient trust in devices.

Medical device innovation has already ramped up in response to the COVID-19 crisis, with Center for Devices and Radiological Health Director Dr. Jeff Shuren noting that 562 medical devices have already been granted emergency use authorization by the FDA.

It's imperative, said Shuren, that patients' needs be considered as part of the creation process.

"We continue to encourage all members of the healthcare ecosystem to strive to understand patients' perspective and proactively incorporate them into medical device development, modification and evaluation," said Shuren. "Patients are truly the inspiration for all the work we do."

"Despite the global challenges with the COVID-19 public health emergency ... the patient's voice won't be stopped," Shuren added. "And if anything, there is even more reason for it to be heard."

However, said Pat Baird, regulatory head of global software standards at Philips, facilitating patient trust also means acknowledging the importance of robust and accurate data sets.

"To help support our patients, we need to become more familiar with them, their medical conditions, their environment, and their needs and wants to be able to better understand the potentially confounding factors that drive some of the trends in the collected data," said Baird.

"An algorithm trained on one subset of the population might not be relevant for a different subset," Baird explained.

For instance, if a hospital needed a device that would serve its population of seniors at a Florida retirement community, an algorithm trained on recognizing the healthcare needs of teens in Maine would not be effective. Not every population will have the same needs.

"This bias in the data is not intentional, but can be hard to identify," he continued. He encouraged the development of a taxonomy of bias types that would be made publicly available.

Ultimately, he said, people won't use what they don't trust. "We need to use our collective intelligence to help produce better artificial intelligence populations," he said.

Captain Terri Cornelison, chief medical officer and director for the health of women at CDRH, noted that demographic identifiers can be medically significant due to genetics and social determinants of health, among other factors.

"Science is showing us that these are not just categorical identifiers but actually clinically relevant," Cornelison said.

She pointed out that a clinical study that does not identify patients' sex may mask different results for people with different chromosomes.

"In many instances, AI and ML devices may be learning a worldview that is narrow in focus, particularly in the available training data, if the available training data do not represent a diverse set of patients," she said.

"More simply, AI and ML algorithms may not represent you if the data do not include you," she said.

"Advances in artificial intelligence are transforming our health systems and daily lives," Cornelison continued. "Yet despite these significant achievements, most ignore the sex, gender, age, race [and] ethnicity dimensions and their contributions to health and disease differences among individuals."

The committee also examined how informed consent might play a role in algorithmic training.

"If I give my consent to be treated by an AI/ML device, I have the right to know whether there were patients like me ... in the data set," said Bennet Dunlap, a health communications consultant. "I think the FDA should not be accepting or approving a medical device that does not have patient engagement" of the kind outlined in committee meetings, he continued.

"You need to know what your data is going to be used for," he reiterated. "I have white privilege. I can just assume old white guys are in [the data sets]. That's where everybody starts. But that should not be the case."

Dr. Monica Parker, assistant professor in neurology and education core member of the Goizueta Alzheimer's Disease Research Center at Emory University, pointed out that diversifying patient data requires turning to trusted entities within communities.

"If people are developing these devices, in the interest of being more broadly diverse, is there some question about where these things were tested?" she asked, raising the issue of testing taking place in academic medical centers or technology centers on the East or West Coast, versus "real-world data collection from hospitals that may be using some variation of the device for disease process."

"Clinicians who are serving the population for which the device is needed" provide accountability and give the device developer a better sense of who they're treating, Parker said. She also reminded fellow committee members that members of different demographic groups are not uniform.

Philip Rutherford, director of operation at Faces and Voices Recovery, pointed out that it's not enough to prioritize diversity in data sets. The people in charge of training the algorithm must also not be homogenous.

"If we want diversity in our data, we have to seek diversity in the people that are collecting the data," said Rutherford.

The committee called on the FDA to take a strong role in addressing algorithmic bias in artificial intelligence and machine learning.

"At the end of the day, diversity, validation and unconscious bias: all these things can be addressed if there's strong leadership from the start," said Conway.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.

Read the original here:
FDA highlights the need to address bias in AI - Healthcare IT News

Artificial intelligence reveals hundreds of millions of trees in the Sahara – Newswise

If you think that the Sahara is covered only by golden dunes and scorched rocks, you aren't alone. Perhaps it's time to shelve that notion. In an area of West Africa 30 times larger than Denmark, an international team, led by University of Copenhagen and NASA researchers, has counted over 1.8 billion trees and shrubs. The 1.3 million km2 area covers the western-most portion of the Sahara Desert, the Sahel and what are known as sub-humid zones of West Africa.

"We were very surprised to see that quite a few trees actually grow in the Sahara Desert, because up until now, most people thought that virtually none existed. We counted hundreds of millions of trees in the desert alone. Doing so wouldn't have been possible without this technology. Indeed, I think it marks the beginning of a new scientific era," asserts Assistant Professor Martin Brandt of the University of Copenhagen's Department of Geosciences and Natural Resource Management, lead author of the study's scientific article, now published in Nature.

The work was achieved through a combination of detailed satellite imagery provided by NASA and deep learning, an advanced artificial intelligence method. Normal satellite imagery is unable to identify individual trees; they remain literally invisible. Moreover, a limited interest in counting trees outside of forested areas led to the prevailing view that there were almost no trees in this particular region. This is the first time that trees across a large dryland region have been counted.

The role of trees in the global carbon budget

New knowledge about trees in dryland areas like this is important for several reasons, according to Martin Brandt. For example, they represent an unknown factor when it comes to the global carbon budget:

"Trees outside of forested areas are usually not included in climate models, and we know very little about their carbon stocks. They are basically a white spot on maps and an unknown component in the global carbon cycle," explains Martin Brandt.

Furthermore, the new study can contribute to better understanding the importance of trees for biodiversity and ecosystems and for the people living in these areas. In particular, enhanced knowledge about trees is also important for developing programmes that promote agroforestry, which plays a major environmental and socio-economic role in arid regions.

"Thus, we are also interested in using satellites to determine tree species, as tree types are significant in relation to their value to local populations who use wood resources as part of their livelihoods. Trees and their fruit are consumed by both livestock and humans, and when preserved in the fields, trees have a positive effect on crop yields because they improve the balance of water and nutrients," explains Professor Rasmus Fensholt of the Department of Geosciences and Natural Resource Management.

Technology with a high potential

The research was conducted in collaboration with the University of Copenhagen's Department of Computer Science, where researchers developed the deep learning algorithm that made the counting of trees over such a large area possible.

The researchers show the deep learning model what a tree looks like by feeding it thousands of images of various trees. Based upon its recognition of tree shapes, the model can then automatically identify and map trees over large areas and thousands of images. The model needs only hours for what would take thousands of humans several years to achieve.
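Once a model has labeled each pixel of an image as tree or not-tree, the counting step reduces to finding connected components in the predicted mask, with each component treated as one crown. Here is a minimal sketch of that final step only; the mask is hand-made toy data rather than real model output, and the breadth-first flood fill is a stand-in for whatever labeling routine the study's actual pipeline uses.

```python
from collections import deque

# Toy binary "tree mask", as a segmentation model might output:
# 1 = pixel predicted as tree crown, 0 = background.
mask = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 0],
]

def count_trees(mask):
    """Count 4-connected components of 1-pixels (one component = one crown)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new crown found
                seen[r][c] = True
                q = deque([(r, c)])
                while q:                        # flood-fill the whole crown
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

print(count_trees(mask))  # 4 distinct crowns in this toy mask
```

At the scale of the study, the same idea is applied tile by tile across millions of satellite-image patches, which is why a model that processes each patch in milliseconds can finish in hours what manual counting never could.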

"This technology has enormous potential when it comes to documenting changes on a global scale and ultimately, in contributing towards global climate goals. We are motivated to develop this type of beneficial artificial intelligence," says professor and co-author Christian Igel of the Department of Computer Science.

The next step is to expand the count to a much larger area in Africa. And in the longer term, the aim is to create a global database of all trees growing outside forest areas.

Read the rest here:
Artificial intelligence reveals hundreds of millions of trees in the Sahara - Newswise

Artificial intelligence and the antitrust case against Google – VentureBeat

Following the launch of investigations last year, the U.S. Department of Justice (DOJ), together with attorneys general from 11 U.S. states, filed a lawsuit against Google on Tuesday alleging that the company maintains monopolies in online search and advertising, and violates laws prohibiting anticompetitive business practices.

It's the first antitrust lawsuit federal prosecutors have filed against a tech company since the Department of Justice brought charges against Microsoft in the 1990s.

"Back then, Google claimed Microsoft's practices were anticompetitive, and yet, now, Google deploys the same playbook to sustain its own monopolies," the complaint reads. "For the sake of American consumers, advertisers, and all companies now reliant on the internet economy, the time has come to stop Google's anticompetitive conduct and restore competition."

No attorneys general from Democratic states joined the suit. State attorneys general, Democrats and Republicans alike, plan to continue with their own investigations, signaling that more charges or backing from states might be on the way. Both the antitrust investigation completed by a congressional subcommittee earlier this month and the new DOJ lawsuit advocate breaking up tech companies as a potential solution.

The 64-page complaint characterizes Google as a monopoly gatekeeper for the internet and spells out the reasoning behind the lawsuit in detail, documenting the company's beginnings at Stanford University in the 1990s alongside deals made in the past decade with companies like Apple and Samsung to maintain Google's dominance. Also key to Google's power and plans for the future is access to personal data and artificial intelligence. In this story, we take a look at the myriad ways in which artificial intelligence plays a role in the antitrust case against Google.

The best place to begin when examining the role AI plays in Google's antitrust case is online search, which is powered by algorithms and automated web crawlers that scour webpages for information. Personalized search results, made possible by the collection of personal data, started in 2009, and today Google can search for images, videos, and even songs that people hum. Google dominates the $40 billion online search industry, and that dominance acts like a self-reinforcing cycle: more data leads to more training data for algorithms, defense against competition, and more effective advertising.

"General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms," the complaint reads. "The additional data from scale allows improved automated learning for algorithms to deliver more relevant results, particularly on fresh queries (queries seeking recent information), location-based queries (queries asking about something in the searcher's vicinity), and long-tail queries (queries used infrequently)."

Search is now primarily conducted on mobile devices like smartphones or tablets. To build monopolies in mobile search and create scale insurmountable to competitors, the complaint states, Google turned to exclusionary agreements with smartphone sellers like Apple and Samsung, as well as revenue sharing with wireless carriers. The Apple-Google symbiosis is in fact so important that losing it is referred to as "code red" at Google, according to the DOJ filing. An unnamed senior Apple employee, corresponding with their counterpart at Google, said it is Apple's vision that the two companies operate "as if one company." Today, Google accounts for four out of five web searches in the United States and 95% of mobile searches. Last year, Google estimated that nearly half of all search traffic originated on Apple devices, while 15-20% of Apple's income came from Google.

Exclusive agreements that put Google apps on mobile devices effectively captured hundreds of millions of users. An antitrust report referenced these data advantages, stating that Google's anticompetitive conduct effectively eliminates rivals' ability to build the scale necessary to compete.

In addition to the DOJ complaint, the antitrust report Congress released earlier this month frequently cites the network effect achieved by Big Tech companies as a significant barrier to entry for smaller businesses or startups. The incumbents have access to large data sets that give them a big advantage, especially when combined with machine learning and AI, the report reads. Companies with superior access to data can use that data to better target users or improve product quality, drawing more users and, in turn, generating more data, an advantageous feedback loop.

Network effects often come up in the congressional report in reference to mobile operating systems, public cloud providers, and AI assistants like Alexa and Google Assistant, which improve their machine learning models through the collection of data like voice recordings.

One potential solution the congressional investigation suggested is better data portability to help small businesses compete with tech giants.

One part of maintaining Google's search monopoly, according to the congressional report, is control of emerging search access points. While Google searches began on desktop computers, mobile is king today, and fast emerging are devices like smartwatches, smart speakers, and IoT devices with AI assistants like Alexa, Google Assistant, and Siri. Virtual assistants are using AI to turn speech into text and predict a user's intent, becoming a new battleground. An internal Google document declared that "voice will become the future of search."

The growth of searches via Amazon Echo devices is why a Morgan Stanley analyst previously suggested Google give everyone in the country a free speaker. In the end, he concluded, it would be cheaper for Google to give away hundreds of millions of speakers than to lose its edge to Amazon.

The scale afforded by Android and native Google apps also appears to be a key part of Google Assistant's ability to understand or translate dozens of languages and collect voice data across the globe.

Search is primarily done on mobile devices today. That's what drives the symbiotic relationship between Apple and Google, where Apple receives 20% of its total revenue from Google in exchange for making Google the de facto search engine on iOS phones, which still make up about 60% of the U.S. smartphone market.

The DOJ suit states that Google is concentrating on Google Nest IoT devices and smart speakers because internet searches will increasingly take place using voice orders. The company wants to control the next popular environment for search queries, the DOJ says, whether it be wearable devices like smartwatches or activity monitors from Fitbit, which Google announced plans to acquire roughly one year ago.

"Google recognizes that its hardware products also have HUGE defensive value in virtual assistant space AND combatting query erosion in core Search business," the DOJ complaint reads. Looking ahead to the future of search, Google sees that "Alexa and others may increasingly be a substitute for Search and browsers with additional sophistication and push into screen devices." The complaint adds that Google has "harmed competition by raising rivals' costs and foreclosing them from effective distribution channels, such as distribution through voice assistant providers, preventing them from meaningfully challenging Google's monopoly in general search services."

In other words, only Google Assistant can get microphone access on a smartphone to respond to a wake word like "Hey, Google," a tactic the complaint says handicaps rivals.

AI like Google Assistant also features prominently in the antitrust report released by a Democrat-led antitrust subcommittee in Congress, which refers to AI assistants as efforts to lock consumers into information ecosystems. The easiest way to spot this lock-in is to consider that Google prioritizes YouTube, Apple wants you to use Apple Music, and Amazon wants users to subscribe to Amazon Prime Music.

The congressional report also documents the recent history of Big Tech companies acquiring startups. It alleges that in order to avoid competition from up-and-coming rivals, companies like Google have bought up startups in emerging fields like artificial intelligence and augmented reality.

If you expect a quick ruling by the D.C. Circuit Court in the antitrust lawsuit against Google, you'll be disappointed; that doesn't seem at all likely. Taking the 1970s case against IBM and the Microsoft suit in the 1990s as a guide, antitrust cases tend to take years. In fact, it's not outside the realm of possibility that this case could still be underway the next time voters pick a president in 2024.

What does seem clear from the language used in both US v. Google and the congressional antitrust report is that both Democrats and Republicans are willing to consider separating company divisions in order to maintain competitive markets and a healthy digital economy. What's also clear is that both the Justice Department and antitrust lawmakers in Congress see action as necessary, based in part on how Google treats personal data and artificial intelligence.

Read this article:
Artificial intelligence and the antitrust case against Google - VentureBeat