Archive for the ‘Artificial Intelligence’ Category

This is what happens when artificial intelligence meets emotional intelligence – The Hindu


Advances in artificial intelligence (AI) over the years have made it a foundational technology in autonomous vehicles and security systems. Now, a team of researchers at Stanford University is teaching computers to recognise not just what objects are in an image, but also how those images make people feel.

The team has trained an algorithm to recognise the emotional intent behind great works of art like Vincent van Gogh's Starry Night and James Whistler's Whistler's Mother.

The ability will be key to making AI not just more intelligent, but more human, a researcher said in the study titled ArtEmis: Affective Language for Visual Art.


The team built a database of 81,000 WikiArt paintings and over 4 lakh (400,000) written responses from 6,500 humans indicating how they felt about a painting, including their reasons for choosing a particular emotion. The team used the responses to train AI to generate emotional responses to visual art and justify those emotions in language.

The algorithm classified each artwork into one of eight emotional categories, including awe, amusement, sadness and fear. It then explained in written text what it is in the image that justifies the emotion.
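As a rough illustration of that final classification step: a model of this kind typically produces one score per emotion category and converts the scores into probabilities. Everything below is invented for illustration — the article names only awe, amusement, sadness and fear, so the remaining category labels and all the scores are placeholders, not the actual ArtEmis model.

```python
import math

# Four categories from the article plus four placeholder labels (assumptions).
EMOTIONS = ["awe", "amusement", "sadness", "fear",
            "contentment", "excitement", "disgust", "anger"]

def softmax(scores):
    """Convert raw per-category scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_emotion(raw_scores):
    """Return the most probable emotion label and its probability."""
    probs = softmax(raw_scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return EMOTIONS[best], probs[best]

# Invented scores a model might emit for a painting; highest score wins.
emotion, prob = classify_emotion([2.1, 0.3, 0.8, 0.1, 1.5, 0.9, 0.0, 0.2])
print(emotion)  # "awe" — index 0 has the largest score
```

The written justification step described in the article would be a separate text-generation model conditioned on the image and the chosen emotion; it is not sketched here.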


The model is said to interpret any form of art, including still life, portraits and abstraction. It also takes into account the subjectivity of art, meaning that not everyone feels the same way about a piece of work, the team noted.

The tool can be used by artists, especially graphic designers, to evaluate if their work is having the desired impact.


Google Maps using artificial intelligence to help point people in the right direction – ZDNet

Boasting that it is on track to bring over 100 "AI-powered" improvements to Google Maps, Google has announced a series of updates that have been or are set to be released in the coming year.

The first is adding Live View, a feature that uses augmented reality cues -- arrows and accompanying directions -- to help point people in the right direction and avoid the "awkward moment when you're walking the opposite direction of where you want to go".

According to Google Maps product VP Dane Glasgow, Live View relies on AI technology, known as global localisation, to scan "tens of billions" of Street View images to help understand a person's orientation, as well as the precise altitude and placement of an object inside a building, such as an airport, transit station, or shopping centre, before providing directions.

"If you're catching a plane or train, Live View can help you find the nearest elevator and escalators, your gate, platform, baggage claim, check-in counters, ticket office, restrooms, ATMs and more. And if you need to pick something up from the mall, use Live View to see what floor a store is on and how to get there so you can get in and out in a snap," Glasgow explained in a post.

For now, the indoor Live View feature is available on Android and iOS in a number of shopping centres in the US across Chicago, Long Island, Los Angeles, Newark, San Francisco, San Jose, and Seattle, with plans to expand it to a select number of airports, shopping centres, and transit stations in Tokyo and Zurich. More cities will also be added, Glasgow confirmed.


Glasgow added that commuters will be able to view current and forecast temperature and weather conditions, as well as the air quality in an area, through Google Maps, made possible through data shared by Google partners such as The Weather Company, AirNow.gov, and the Central Pollution Control Board. Both layers will be available on Android and iOS; the weather layer will launch globally, while the air quality layer will launch in Australia, the US, and India, with plans to expand it to other countries.

On the environment, Glasgow also noted that Google is building a new routing model using insights from the US Department of Energy's National Renewable Energy Lab to help deliver more eco-friendly route options, based on factors like road incline and traffic congestion, for commuters in the US on Android and iOS. The model will be available later this year, with plans for global expansion at an unspecified later date.

Glasgow said the move is part of the company's commitment to reduce its environmental footprint.

"Soon, Google Maps will default to the route with the lowest carbon footprint when it has approximately the same ETA as the fastest route. In cases where the eco-friendly route could significantly increase your ETA, we'll let you compare the relative CO2 impact between routes so you can choose," he said.
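The quoted rule reduces to a simple comparison: default to the lowest-CO2 route whenever its ETA is close enough to the fastest route's. The sketch below is a minimal illustration of that logic only; the `Route` type, the tolerance value, and all numbers are assumptions, not details from Google's routing model.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    eta_minutes: float
    co2_grams: float

def pick_default_route(routes, eta_tolerance_minutes=2.0):
    """Prefer the greenest route unless it is meaningfully slower."""
    fastest = min(routes, key=lambda r: r.eta_minutes)
    greenest = min(routes, key=lambda r: r.co2_grams)
    if greenest.eta_minutes - fastest.eta_minutes <= eta_tolerance_minutes:
        return greenest  # approximately the same ETA: default to low carbon
    return fastest       # otherwise show the fastest and let the user compare

routes = [Route("highway", 30.0, 1200.0), Route("surface", 31.0, 900.0)]
print(pick_default_route(routes).name)  # "surface": 1 min slower, less CO2
```

With a tighter tolerance the same inputs would fall through to the fastest route, which matches the article's note that users can compare CO2 impact when the eco-friendly option is significantly slower.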

In further efforts to meet its sustainability commitment, the tech giant also plans to introduce, in the "coming months", an updated version of Maps that gives commuters a view of all routes and transportation modes available to their destination without toggling between tabs, while automatically prioritising a user's preferred transport mode or the modes that are popular in their city.

"For example, if you bike a lot, we'll automatically show you more biking routes. And if you live in a city like New York, London, Tokyo, or Buenos Aires where taking the subway is popular, we'll rank that mode higher," Glasgow said.

Also within Maps, Google said it is teaming up with US supermarket Fred Meyer to pilot, in select stores in Portland, Oregon, a feature designed to make contactless grocery pickup easier. It will notify customers what time to leave to pick up their groceries, share their arrival time with the store, and let them "check in" on the Google Maps app so their orders can be brought out to their car on arrival.


Here's why UF is going to use artificial intelligence across its entire curriculum | Column – Tampa Bay Times

Henry Ford did not invent the automobile. That was Karl Benz.

But Ford did perfect the assembly line for auto production. That innovation directly led to cars becoming markedly cheaper, putting them within reach of millions of Americans.

In effect, Ford democratized the automobile, and I see a direct analogy to what the University of Florida is doing for artificial intelligence (AI, for short).

In July, the University of Florida announced a $100 million public-private partnership with NVIDIA, the maker of graphics processing units used in computers, that will catapult UF's research strength to address some of the world's most formidable challenges, create unprecedented access to AI training and tools for under-represented communities, and build momentum for transforming the future of the workforce.

At the heart of this effort is HiPerGator AI, the most powerful AI supercomputer in higher education. The supercomputer, as well as related tools, training and other resources, is made possible by a donation from UF alumnus Chris Malachowsky as well as from NVIDIA, the Silicon Valley-based technology company he co-founded and a world leader in AI and accelerated computing. State support also plays a critical role, particularly as UF looks to add 100 AI-focused faculty members to the 500 new faculty recently added across the university, many of whom will weave AI into their teaching and research.

UF will likely be the nation's first comprehensive research institution to integrate AI across the curriculum and make it a ubiquitous part of its academic enterprise. It will offer certificates and degree programs in AI and data science, with curriculum modules for specific technical and industry-focused domains. The result? Thousands of students per year will graduate with AI skills, growing the AI-trained workforce in Florida and serving as a national model for institutions across the country. Ultimately, UF's effort will help address the important national problem of how to train the nation's 21st-century workforce at scale.

Further, due to the unparalleled capabilities of our new machine, researchers will now have the tools to solve applied problems previously out of reach. Already, researchers are eyeing how to identify at-risk students even if they are learning remotely, how to bend the medical cost curve to a sustainable level, and how to solve the problems facing Floridas coastal communities and fresh water supply.

Additionally, UF recently announced it would make its supercomputer available to the entire State University System for educational and research purposes, further bolstering research and workforce training opportunities and positioning Florida to be a national leader in a field revolutionizing the way we all work and live. Soon, we plan to offer access to the machine even more broadly, boosting the national competitiveness of the United States by partnering with educational institutions and private industry around the country.

Innovation, access, economic impact, world-changing technological advancement: UF's AI initiative provides all these things and more.

If Henry Ford were alive today, I believe he would recognize the importance of what's happening at UF. And while he did not graduate from college, I believe he would be proud to see it happening at an American public university.

Joe Glover is provost and senior vice president of academic affairs at the University of Florida.


Study Finds Both Opportunities and Challenges for the Use of Artificial Intelligence in Border Management – Homeland Security Today – HSToday

Frontex, the European Border and Coast Guard Agency, commissioned RAND Europe to carry out an artificial intelligence (AI) research study providing an overview of the main opportunities, challenges and requirements for the adoption of AI-based capabilities in border management.

AI offers several opportunities to the European Border and Coast Guard, including increased efficiency and an improved ability of border security agencies to adapt to a fast-paced geopolitical and security environment. However, various technological and non-technological barriers might influence how AI materializes in the performance of border security functions.

Some of the analyzed technologies included automated border control, object recognition to detect suspicious vehicles or cargo, and the use of geospatial data analytics for operational awareness and threat detection.

The findings from the study have now been made public, and Frontex aims to use the data gleaned to shape the future landscape of AI-based capabilities for Integrated Border Management, including AI-related research and innovation projects.

The study identified a wide range of current and potential future uses of AI in relation to five key border security functions, namely: situation awareness and assessment; information management; communication; detection, identification and authentication; and training and exercise.

According to the report, AI is generally believed to bring at least an incremental improvement to the existing ways in which border security functions are conducted. This includes front-end capabilities that end users directly utilize, such as surveillance systems, as well as back-end capabilities that enable border security functions, like automated machine learning.

Potential barriers to AI adoption include knowledge and skills gaps, organizational and cultural issues, and a current lack of conclusive evidence from actual real-life scenarios.

Read the full report at Frontex



How To Patent An Artificial Intelligence (AI) Invention: Guidance From The US Patent Office (USPTO) – Intellectual Property – United States – Mondaq…

PatentNext Summary: AI-related inventions have experienced explosive growth. In view of this, the USPTO has provided guidance in the form of an example claim and an "informative" PTAB decision directed to AI-related claims that practitioners can use to aid in preparing robust patent claims on AI-related inventions.

Artificial Intelligence (AI) has experienced explosive growth across various industries. From Apple's Face ID (face recognition) and Amazon's Alexa (voice recognition) to GM Cruise (autonomous vehicles), AI continues to shape the modern world. See Artificial Intelligence.

It comes as no surprise, therefore, that patents related to AI inventions have also experienced explosive growth.

Indeed, in the last quarter of 2020, the United States Patent and Trademark Office (USPTO) reported that patent filings for Artificial Intelligence (AI) related inventions more than doubled from 2002 to 2018. See Office of the Chief Economist, Inventing AI: Tracking The Diffusion Of Artificial Intelligence With Patents, IP DATA HIGHLIGHTS No. 5 (Oct. 2020).

During the same period, however, the U.S. Supreme Court's decision in Alice Corp. v. CLS Bank International cast doubt on the patentability of software-related inventions, a category that AI sits squarely within.

Fortunately, since the Supreme Court's Alice decision, the Federal Circuit has clarified (on numerous occasions) that software-related patents are indeed patent-eligible. See Are Software Inventions Patentable?

More recently, in 2019, the United States Patent and Trademark Office (USPTO) provided its own guidance on the topic of patenting AI inventions. See 2019 Revised Patent Subject Matter Eligibility Guidance. Below we explore these examples.

As part of its 2019 Revised Patent Subject Matter Eligibility Guidance (the "2019 PEG"), the USPTO provided several example patent claims and respective analyses under the two-part Alice test. See Subject Matter Eligibility Examples: Abstract Ideas.

One of these examples ("Example 39") demonstrates a patent-eligible artificial intelligence invention. In particular, Example 39 provides a hypothetical AI invention labeled "Method for Training a Neural Network for Facial Detection" and describes an invention addressing issues of older facial recognition methods, which suffered from the inability to robustly detect human faces in images where there are shifts, distortions, and variations in scale and rotation of the face pattern in the image.

The example inventive method recites claim elements for training a neural network across two stages of training set data so as to minimize false positives for facial detection. The claims are reproduced below:

collecting a set of digital facial images from a database;

applying one or more transformations to each digital facial image including mirroring, rotating, smoothing, or contrast reduction to create a modified set of digital facial images;

creating a first training set comprising the collected set of digital facial images, the modified set of digital facial images, and a set of digital non-facial images;

training the neural network in a first stage using the first training set;

creating a second training set for a second stage of training comprising the first training set and digital non-facial images that are incorrectly detected as facial images after the first stage of training; and

training the neural network in a second stage using the second training set.
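Read as an engineering recipe, the claimed two-stage procedure resembles what practitioners call hard-negative mining: train once, harvest the non-face images the stage-one network still flags as faces, then retrain with those failures included. The sketch below mirrors only the data-set construction recited in the claim; `transform` and the toy stage-one predictor are invented stand-ins for illustration, not anything the claim or the USPTO specifies.

```python
def transform(image):
    """Stand-in for the claimed transformations (mirroring, rotating,
    smoothing, or contrast reduction applied to each facial image)."""
    return f"modified({image})"

def build_training_sets(face_images, non_face_images, stage_one_predict):
    """Build the two training sets the claim recites."""
    # First training set: collected faces, their modified copies, and non-faces.
    modified = [transform(img) for img in face_images]
    first_set = face_images + modified + non_face_images

    # (Stage one: the neural network would be trained on first_set here.)

    # Second training set: the first set plus the non-face images that the
    # stage-one network incorrectly detects as faces (its false positives).
    false_positives = [img for img in non_face_images if stage_one_predict(img)]
    second_set = first_set + false_positives
    return first_set, second_set

# Toy run with a fake stage-one predictor that mistakes "statue" for a face.
faces, non_faces = ["face1", "face2"], ["tree", "statue"]
first, second = build_training_sets(faces, non_faces,
                                    lambda img: img == "statue")
print(len(first), len(second))  # 6 7: the hard negative is trained on twice
```

Retraining on the second set over-weights exactly the examples the first-stage network got wrong, which is how the claimed method minimizes false positives.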

The USPTO's analysis of Example 39 informs that the above claim is patent-eligible (and not "directed to" an abstract idea) because the AI-specific claim elements do not recite a mere "abstract idea." See How to Patent Software Inventions: Show an "Improvement". In particular, while some of the claim elements may be based on mathematical concepts, such concepts are not recited in the claim. Further, the claim does not recite a mental process because the steps are not practically performed in the human mind. Finally, the claim does not recite any method of organizing human activity, such as a fundamental economic concept or managing interactions between people. Because the claim does not fall into one of these three categories, then, according to the USPTO, it is patent-eligible.

As a further example, the Patent Trial and Appeal Board (PTAB) more recently applied the 2019 PEG (as revised) in an ex parte appeal involving an artificial intelligence invention. See Ex parte Hannun (formerly Ex parte Linden), 2018-003323 (April 1, 2019) (designated by the PTAB as an "Informative" decision).

In Hannun, the patent-at-issue related to "systems and methods for improving the transcription of speech into text." The claims included several AI-related elements, including "a set of training samples used to train a trained neural network model" as used to interpret a string of characters for speech translation. Claim 11 of the patent-at-issue is illustrative and is reproduced below:

receiving an input audio from a user;

normalizing the input audio to make a total power of the input audio consistent with a set of training samples used to train a trained neural network model;

generating a jitter set of audio files from the normalized input audio by translating the normalized input audio by one or more time values;

for each audio file from the jitter set of audio files, which includes the normalized input audio:

generating a set of spectrogram frames for each audio file;

inputting the audio file along with a context of spectrogram frames into a trained neural network;

obtaining predicted character probabilities outputs from the trained neural network; and

decoding a transcription of the input audio using the predicted character probabilities outputs from the trained neural network constrained by a language model that interprets a string of characters from the predicted character probabilities outputs as a word or words.
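The first two claimed steps are concrete enough to sketch: scale the signal so its total power matches the training data, then build a "jitter set" by shifting the signal by small time offsets. The toy below illustrates only those two steps on a list of samples; the target power, the shift offsets, and the zero-padding at the edges are assumptions for illustration, not details from the patent.

```python
def normalize_power(samples, target_power):
    """Scale samples so their total power (sum of squares) equals target_power."""
    power = sum(s * s for s in samples)
    scale = (target_power / power) ** 0.5
    return [s * scale for s in samples]

def jitter_set(samples, shifts=(-1, 0, 1)):
    """Translate the signal by each time offset, zero-padding the edges.
    The unshifted original (offset 0) is kept, as the claim requires."""
    out = []
    for k in shifts:
        if k >= 0:
            out.append([0.0] * k + samples[:len(samples) - k])
        else:
            out.append(samples[-k:] + [0.0] * (-k))
    return out

audio = [0.5, -0.25, 0.75, 0.1]
normalized = normalize_power(audio, target_power=1.0)
files = jitter_set(normalized)  # three versions: shifted left, original, right
```

Each file in the jitter set would then be converted to spectrogram frames and fed to the trained network, with a language model constraining the decoded characters into words, per the remaining claim steps.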

Applying the two-part Alice test, the Examiner had rejected the claims, finding them patent-ineligible as merely abstract ideas (i.e., mathematical concepts and certain methods of organizing human activity without significantly more).

The PTAB disagreed. While the PTAB generally agreed that the patent specification included mathematical formulas, such mathematical formulas were "not recited in the claims." (original emphasis).

Nor did the claims recite "organizing human activity," at least because, according to the PTAB, the claims were directed to a specific implementation comprising technical elements including AI and computer speech recognition.

Finally, and importantly, the PTAB noted the importance of the specification describing how the claimed invention provides an improvement to the technical field of speech recognition, with the PTAB specifically noting that "the Specification describes that using DeepSpeech learning, i.e., a trained neural network, along with a language model 'achieves higher performance than traditional methods on hard speech recognition tasks while also being much simpler.'"

For each of these reasons, the PTAB found the claims of the patent-at-issue in Hannun to be patent-eligible.

Each of Example 39 and the PTAB's informative decision in Hannun demonstrates the importance of properly drafting AI-related claims (and, in general, software-related claims) to follow a three-part pattern: describe an improvement to the underlying computing invention, describe how the improvement overcomes problems experienced in the prior art, and recite the improvement in the claims. For more information, see How to Patent Software Inventions: Show an "Improvement".

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
