Archive for the ‘Machine Learning’ Category

AI Ethics Tempted But Hesitant To Use AI Adversarial Attacks Against The Evils Of Machine Learning, Including For Self-Driving Cars – Forbes

AI Ethics quandary about using adversarial attacks against Machine Learning, even if done for purposes of goodness.

It is widely accepted sage wisdom to garner as much as you can about your adversaries.

Frederick the Great, the famous king of Prussia and a noted military strategist, stridently said this: "Great advantage is drawn from knowledge of your adversary, and when you know the measure of their intelligence and character, you can use it to play on their weakness."

Astutely leveraging the awareness of your adversaries is both a vociferous defense and a compelling offense-driven strategy in life. On the one hand, you can be better prepared for whatever your adversary might try to destructively do to you. The other side of that coin is that you are likely able to carry out better attacks against your adversary via the known and suspected weaknesses of any vaunted foe.

Per the historically revered statesman and ingenious inventor Benjamin Franklin, those that are on their guard and appear ready to receive their adversaries are in much less danger of being attacked, much more so than otherwise being unawares, supine, and negligent in preparation.

Why all this talk about adversaries?

Because one of the biggest concerns facing much of today's AI is that cyber crooks and other evildoers are deviously attacking AI systems using what is commonly referred to as adversarial attacks. This can cause an AI system to falter and fail to perform its designated functions. As you'll see in a moment, there are a variety of vexing AI Ethics and Ethical AI issues underlying the matter, such as ensuring that AI systems are protected against such scheming adversaries, see my ongoing and extensive coverage of AI Ethics at the link here and the link here, just to name a few.

Perhaps even worse than getting the AI to simply stumble, the adversarial attack can sometimes be used to get AI to perform as the wrongdoer wishes the AI to perform. The attacker can essentially trick the AI into doing the bidding of the malefactor. Whereas some adversarial attacks seek to disrupt or confound the AI, another equally if not more insidious form of deception involves getting the AI to act on the behalf of the attacker.

It is almost as though one might use a mind trick or hypnotic means to get a human to do wrong acts and yet the person is blissfully unaware that they have been fooled into doing something that they should not have done. To clarify, the act that is performed does not necessarily have to be wrong per se or illegal on its merits. For example, getting a bank teller to open the safe or vault for you is not in itself a wrong or illegal act. The bank teller is doing what they legitimately are able to perform as a valid bank-approved task. Of course, if they open the vault and doing so allows a robber to steal the money and all of the gold bullion therein, the bank teller has been tricked into performing an act that they should not have undertaken in the given circumstances.

The use of adversarial attacks against AI has to a great extent arisen because of the way in which much of contemporary AI is devised. You see, this latest era of AI has tended to emphasize the use of Machine Learning (ML) and Deep Learning (DL). These are computational pattern matching techniques and technologies which have dramatically aided the advancement of modern-day AI systems. ML/DL is often used as a key element in many of the AI systems that you interact with daily, such as the use of conversational interactive systems or Natural Language Processing (NLP) akin to Alexa and Siri.

The manner in which ML/DL is designed and fielded provides a fertile opening for the leveraging of adversarial attacks. Cybercrooks generally can guess how the ML/DL was built. They can make reasoned guesses about how the ML/DL will react when put into use. There are only so many ways that ML/DL is usually constructed. As such, the evildoer hackers can try a slew of underhanded ML/DL adversarial tricks to get the AI to either go awry or do their bidding.

In contrast, during the prior era of AI systems, it was somewhat harder to undertake adversarial attacks since much of the AI was more idiosyncratic and written in a more proprietary or individualistic manner. You would have had a more challenging time trying to guess how the AI was constructed and also how it might react when placed into active use. In comparison, ML/DL is largely more predictable as to its susceptibilities (this is not always the case, and please know that I am broadly generalizing).

You might be thinking that if adversarial attacks can be targeted relatively easily at ML/DL, then certainly there should be a boatload of cybersecurity measures available to protect against those attacks. One would hope that those devising and releasing their AI applications would ensure that the app was securely able to fight against those adversarial attacks.

The answer is yes and no.

Yes, there exist numerous cybersecurity protections that can be used by and within ML/DL to guard against adversarial attacks. Unfortunately, the answer is also somewhat a no in that many of the AI builders are not especially versed in those protections or are not explicitly including those protections.

There are lots of reasons for this.

One is that some AI software engineers concentrate solely on the AI side and do not particularly care about the cybersecurity elements. They figure that someone else further along in the chain of making and releasing the AI will deal with any needed cybersecurity protections. Another reason for the lack of protection against adversarial attacks is that it can be a burden of sorts to the AI project. An AI project might be under a tight deadline to get the AI out the door. Adding into the mix a bunch of cybersecurity protections that need to be crafted or set up will potentially delay the production cycle of the AI. Furthermore, the cost of creating the AI is bound to go up too.

Note that none of those reasons is a satisfactory excuse for allowing an AI system to be vulnerable to adversarial attacks. Those in the know would say that the famous line of "pay me now or pay me later" comes into play in this instance. You can skirt past the cybersecurity portions to get an AI system sooner into production, but the chances are that it will then suffer an adversarial attack. A cost-benefit analysis and ROI (return on investment) needs to be properly assessed as to whether the upfront costs and the benefits thereof are going to outweigh the costs of repairing and dealing with cybersecurity intrusions further down the pike.

There is no free lunch when it comes to making ML/DL that is well-protected against adversarial attacks.

That being said, you don't necessarily need to move heaven and earth to be moderately protected against those evildoing tricks. Savvy specialists that are versed in cybersecurity protections can pretty much sit side-by-side with the AI crews and dovetail the security into the AI as it is being devised. There is also the assumption that a well-versed AI builder can readily use AI constructing techniques and technologies that simultaneously aid their AI building and seamlessly encompass adversarial attack protections. To adequately do so, they usually need to know about the nature of adversarial attacks and how to best blunt or mitigate them. This is something only gradually becoming regularly instituted as part of devising AI systems.

A twist of sorts is that more and more people are getting into the arena of developing ML/DL applications. Regrettably, some of those people are not versed in AI per se, and neither are they versed in cybersecurity. The idea overall is that perhaps by making the ability to craft AI systems with ML/DL widely available to all we are aiming to democratize AI. That sounds good, but there are downsides to this popular exhortation, see my analysis and coverage at the link here.

Speaking of twists, I will momentarily get to the biggest twist of them all, namely, I am going to shock you with a recently emerging notion that some find sensible and others believe is reprehensible. I'll give you a taste of where I am heading on this heated and altogether controversial matter.

Are you ready?

There is a movement toward using adversarial attacks as a means to disrupt or fool AI systems that are being used by wrongdoers.

Let me explain.

So far, I have implied that AI is seemingly always being used in the most innocent and positive of ways and that only miscreants would wish to confound the AI via the use of adversarial attacks. But keep in mind that bad people can readily devise AI and use that AI for doing bad things.

You know how it is, what's good for the goose is good for the gander.

Criminals and cybercrooks are eagerly wising up to building and using AI ML/DL to carry out untoward acts. When you come in contact with an AI system, you might not have any means of knowing whether it is an AI For Good versus an AI For Bad type of system. Be on the watch! Just because AI is being deployed someplace does not somehow guarantee that the AI was crafted by well-intended builders. The AI could be deliberately devised for foul purposes.

Here then is the million-dollar question.

Should we be okay with using adversarial attacks on purportedly AI For Bad systems?

I'm sure that your first thought is that we ought to indeed be willing to fight fire with fire. If AI For Good systems can be shaken up via adversarial attacks, we can use those same evildoing adversarial attacks to shake up those atrocious AI For Bad systems. We can rightfully turn the attacking capabilities into an act of goodness. Fight evil using the appalling trickery of evil. The net result would seem to be an outcome of good.

Not everyone agrees with that sentiment.

From an AI Ethics perspective, there is a lot of handwringing going on about this meaty topic. Some would argue that by leveraging adversarial attacks, even when the intent is for the good, you are perpetuating the use of adversarial attacks all-told. You are basically saying that it is okay to launch and promulgate adversarial attacks. Shame on you, they exclaim. We ought to be stamping out evil rather than encouraging or expanding upon evil (even if the evil is ostensibly aiming to offset evil and carry out the work of the good).

Those against the use of adversarial attacks would also argue that by keeping adversarial attacks in the game that you are going to merely step into a death knell of quicksand. More and stronger adversarial attacks will be devised under the guise of attacking the AI For Bad systems. That seems like a tremendously noble pursuit. The problem is that the evildoers will undoubtedly also grab hold of those emboldened and super-duper adversarial attacks and aim them squarely at the AI For Good.

You are blindly promoting the cat and mouse gambit. We might be shooting ourselves in the foot.

A retort to this position is that there are no practical means of stamping out adversarial attacks. No matter whether you want them to exist or not, the evildoers are going to make sure they do persist. In fact, the evildoers are probably going to be making the adversarial attacks more resilient and potent, doing so to overcome whatever cyber protections are put in place to block them. Thus, a proverbial head-in-the-sand approach to dreamily pretending that adversarial attacks will simply slip quietly away into the night is pure nonsense.

You could contend that adversarial attacks against AI are a double-edged sword. AI researchers have noted this quandary, as stated by these authors in a telling article in the AI and Ethics journal: "Sadly, AI solutions have already been utilized for various violations and theft, even receiving the name AI or Crime (AIC). This poses a challenge: are cybersecurity experts thus justified to attack malicious AI algorithms, methods and systems as well, to stop them? Would that be fair and ethical? Furthermore, AI and machine learning algorithms are prone to be fooled or misled by the so-called adversarial attacks. However, adversarial attacks could be used by cybersecurity experts to stop the criminals using AI, and tamper with their systems. The paper argues that this kind of attacks could be named Ethical Adversarial Attacks (EAA), and if used fairly, within the regulations and legal frameworks, they would prove to be a valuable aid in the fight against cybercrime" (article by Michał Choraś and Michał Woźniak, "The Double-Edged Sword Of AI: Ethical Adversarial Attacks To Counter Artificial Intelligence For Crime").

I'd ask you to mull this topic over and render a vote in your mind.

Is it unethical to use AI adversarial attacks against AI For Bad, or can we construe this as an entirely unapologetic Ethical AI practice?

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let's take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we can set the stage by looking at some examples of adversarial attacks to establish what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I've discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I'll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn't as yet a singular list of universal appeal and concurrence. That's the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of.

First, let's cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I've covered in-depth at the link here, these are their identified six primary AI ethics principles:

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I've covered in-depth at the link here, these are their six primary AI ethics principles:

I've also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled The Global Landscape Of AI Ethics Guidelines (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the norms of Ethical AI that are being established. This is an important highlight since the usual assumption is that only coders or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

Let's also make sure we are on the same page about the nature of today's AI.

There isn't any AI today that is sentient. We don't have this. We don't know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let's keep things more down to earth and consider today's computational non-sentient AI.

Realize that today's AI is not able to think in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn't any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the old or historical data are applied to render a current decision.
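
To make that cycle concrete, here is a minimal Python sketch of the pattern-matching flow just described, using scikit-learn and an entirely made-up set of historical decisions (the feature names and numbers are illustrative only, not drawn from any real system):

```python
# Minimal sketch of the ML/DL pattern-matching flow described above.
# The "historical decisions" data is entirely made up for illustration.
from sklearn.linear_model import LogisticRegression

# Historical data: each row is [applicant_income, loan_amount], label 1 = approved.
X_history = [[45_000, 10_000], [82_000, 15_000], [30_000, 20_000], [95_000, 12_000]]
y_history = [1, 1, 0, 1]

# "Find mathematical patterns" in the old decisions.
model = LogisticRegression().fit(X_history, y_history)

# Apply those learned patterns to render a decision on new data.
new_case = [[52_000, 18_000]]
print(model.predict(new_case))        # predicted decision
print(model.predict_proba(new_case))  # confidence behind that decision
```

Note that the model only mimics whatever regularities sit in the historical rows, which is exactly why biased historical decisions lead to biased predictions.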

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

I trust that you can readily see how adversarial attacks fit into these AI Ethics matters. Evildoers are undoubtedly going to use adversarial attacks against ML/DL and other AI that is supposed to be doing AI For Good. Meanwhile, those evildoers are indubitably going to be devising AI For Bad that they foist upon us all. To try and fight against those AI For Bad systems, we could arm ourselves with adversarial attacks. The question is whether we are doing more good or more harm by leveraging and continuing the advent of adversarial attacks.

Time will tell.

One vexing issue is that there is a myriad of adversarial attacks that can be used against AI ML/DL. You might say there are more than you can shake a stick at. Trying to devise protective cybersecurity measures to negate all of the various possible attacks is somewhat problematic. Just when you might think you've done a great job of dealing with one type of adversarial attack, your AI might get blindsided by a different variant. A determined evildoer is likely to toss all manner of adversarial attacks at your AI, hoping that at least one or more of them sticks. Of course, if we are using adversarial attacks against AI For Bad, we too would take the same advantageous scattergun approach.

Some of the most popular types of adversarial attacks include:

At this juncture of this weighty discussion, I'd bet that you are desirous of some illustrative examples that might showcase the nature and scope of adversarial attacks against AI and particularly aimed at Machine Learning and Deep Learning. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here's then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the nature of adversarial attacks against AI, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn't a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I'd like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don't yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Adversarial Attacks Against AI

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today's AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to todays AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won't natively somehow know about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let's dive into the myriad aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesnt do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

As earlier mentioned, some of the most popular types of adversarial attacks include:

We can showcase the nature of each such adversarial attack and do so in the context of AI-based self-driving cars.

Adversarial Falsification Attacks

Consider the use of adversarial falsifications.

There are generally two such types: (1) false-positive attacks, and (2) false-negative attacks. In the false-positive attack, the emphasis is on presenting to the AI a so-called negative sample that is then incorrectly classified by the ML/DL as a positive one. The jargon for this is that it is a Type I error (this is reminiscent perhaps of your days of taking a statistics class in college). In contrast, the false-negative attack entails presenting a positive sample that the ML/DL incorrectly classifies as a negative instance, known as a Type II error.

Suppose that we had trained an AI driving system to detect Stop signs. We used an ML/DL that we had trained beforehand with thousands of images that contained Stop signs. The idea is that we would be using video cameras on the self-driving car to collect video and images of the roadway scene surrounding the autonomous vehicle during a driving journey. As the digital imagery real-time streams into an onboard computer, the ML/DL scans the digital data to detect any indication of a nearby Stop sign. The detection of a Stop sign is obviously crucial for the AI driving system. If a Stop sign is detected by the ML/DL, this is conveyed to the AI driving system and the AI would need to ascertain a suitable means to use the driving controls to bring the self-driving car to a proper and safe stop.
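
To ground the Type I and Type II notions in this Stop sign scenario, here is a small hypothetical Python sketch that tallies false positives and false negatives for a stand-in detector; the detection rule, the frame features, and the ground-truth labels are all invented for illustration and are not how a production AI driving system works:

```python
# Sketch: scoring a (hypothetical) Stop-sign detector for Type I / Type II errors.
# `detector` is a stand-in stub; a real system would run an ML/DL model on camera frames.

def detector(frame_features):
    # Stand-in rule: "predict Stop sign if the frame is mostly red and octagon-like".
    redness, octagon_score = frame_features
    return redness > 0.6 and octagon_score > 0.5

# Each tuple: (frame features, ground truth: True if a real Stop sign is present).
frames = [
    ((0.9, 0.8), True),   # genuine Stop sign
    ((0.7, 0.6), False),  # red non-Stop sign, the false-positive bait discussed below
    ((0.8, 0.2), False),  # red billboard
    ((0.5, 0.9), True),   # faded Stop sign, a candidate for a false negative
]

false_pos = sum(1 for f, truth in frames if detector(f) and not truth)   # Type I
false_neg = sum(1 for f, truth in frames if not detector(f) and truth)   # Type II
print(f"Type I (false positives): {false_pos}, Type II (false negatives): {false_neg}")
```

An adversarial falsification attack is, in effect, an attempt to craft inputs that push one of those two counters up on purpose.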

Humans seem to readily be able to detect Stop signs, at least most of the time. Our human perception of such signs is keenly honed by our seemingly innate cognitive pattern matching capacities. All we need to do is learn what a Stop sign looks like and we take things from there. A toddler learns soon enough that a Stop sign is typically red in color, contains the word STOP in large letters, has a distinctive octagonal shape, usually is posted adjacent to the roadway and resides at a person's height, and so on.

Imagine an evildoer that wants to make trouble for self-driving cars.

In a false-positive adversarial attack, the wrongdoer would try to trick the ML/DL into computationally calculating that a Stop sign exists even when there isn't a Stop sign present. Maybe the wrongdoer puts up a red sign along a roadway that looks generally similar to a Stop sign but lacks the word STOP on it. A human would likely realize that this is merely a red sign and not a driving directive. The ML/DL might though calculate that the sign sufficiently resembles a Stop sign to the degree that the AI ought to consider the sign as in fact a Stop sign.

You might be tempted to think that this is not much of an adversarial attack and that it seems rather innocuous. Well, suppose that you are driving in a car and meanwhile a self-driving car that is ahead of you suddenly and seemingly without any basis for doing so comes to an abrupt stop (due to having misconstrued a red sign near the roadway as being a Stop sign). You might ram into that self-driving car. It could be that the AI was fooled into computationally calculating that a non-stop sign was a Stop sign, thus committing a false-positive error. You get injured, the passengers in the self-driving car get injured, and perhaps even pedestrians get injured by this dreadful false-positive adversarial attack.

A false-negative adversarial attack is somewhat akin to this preceding depiction though based on tricking the ML/DL into incorrectly misclassifying in the other direction, as it were. Imagine that a Stop sign is sitting next to the roadway and for all usual visual reasons seems to be a Stop sign. Humans accept that this is indeed a valid Stop sign.

Read the original:
AI Ethics Tempted But Hesitant To Use AI Adversarial Attacks Against The Evils Of Machine Learning, Including For Self-Driving Cars - Forbes

Learning to think critically about machine learning | MIT News | Massachusetts Institute of Technology – MIT News

Students in the MIT course 6.036 (Introduction to Machine Learning) study the principles behind powerful models that help physicians diagnose disease or aid recruiters in screening job candidates.

Now, thanks to the Social and Ethical Responsibilities of Computing (SERC) framework, these students will also stop to ponder the implications of these artificial intelligence tools, which sometimes come with their share of unintended consequences.

Last winter, a team of SERC Scholars worked with instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and the 6.036 teaching assistants to infuse weekly labs with material covering ethical computing, data and model bias, and fairness in machine learning. The process was initiated in the fall of 2019 by Jacob Andreas, the X Consortium Assistant Professor in the Department of Electrical Engineering and Computer Science. SERC Scholars collaborate in multidisciplinary teams to help postdocs and faculty develop new course material.

Because 6.036 is such a large course, more than 500 students who were enrolled in the 2021 spring term grappled with these ethical dimensions alongside their efforts to learn new computing techniques. For some, it may have been their first experience thinking critically in an academic setting about the potential negative impacts of machine learning.

The SERC Scholars evaluated each lab to develop concrete examples and ethics-related questions to fit that week's material. Each brought a different toolset. Serena Booth is a graduate student in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL). Marion Boulicault was a graduate student in the Department of Linguistics and Philosophy, and is now a postdoc in the MIT Schwarzman College of Computing, where SERC is based. And Rodrigo Ochigame was a graduate student in the Program in History, Anthropology, and Science, Technology, and Society (HASTS) and is now an assistant professor at Leiden University in the Netherlands. They collaborated closely with teaching assistant Dheekshita Kumar, MEng '21, who was instrumental in developing the course materials.

They brainstormed and iterated on each lab, while working closely with the teaching assistants to ensure the content fit and would advance the core learning objectives of the course. At the same time, they helped the teaching assistants determine the best way to present the material and lead conversations on topics with social implications, such as race, gender, and surveillance.

"In a class like 6.036, we are dealing with 500 people who are not there to learn about ethics. They think they are there to learn the nuts and bolts of machine learning, like loss functions, activation functions, and things like that. We have this challenge of trying to get those students to really participate in these discussions in a very active and engaged way. We did that by tying the social questions very intimately with the technical content," Booth says.

For instance, in a lab on how to represent input features for a machine learning model, they introduced different definitions of fairness, asked students to consider the pros and cons of each definition, then challenged them to think about the features that should be input into a model to make it fair.
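
As a rough illustration of what comparing fairness definitions can look like in practice (this is not the actual 6.036 lab code; the groups, predictions, and labels below are made up), here is a toy Python sketch contrasting demographic parity with equal opportunity:

```python
# Toy comparison of two fairness definitions (not the actual 6.036 lab material).
# Each record: (group, model_prediction, true_label), with made-up values.
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 0, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 1),
]

def positive_rate(group):
    rows = [r for r in records if r[0] == group]
    return sum(pred for _, pred, _ in rows) / len(rows)

def true_positive_rate(group):
    rows = [r for r in records if r[0] == group and r[2] == 1]
    return sum(pred for _, pred, _ in rows) / len(rows)

# Demographic parity asks: are positive predictions handed out at equal rates?
print("Positive rate:", positive_rate("A"), positive_rate("B"))
# Equal opportunity asks: among truly qualified people, are detection rates equal?
print("True positive rate:", true_positive_rate("A"), true_positive_rate("B"))
```

Even in this tiny example the two definitions can disagree, which is part of what makes choosing input features and fairness criteria a genuinely hard design decision.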

Four labs have now been published on MIT OpenCourseWare. A new team of SERC Scholars is revising the other eight, based on feedback from the instructors and students, with a focus on learning objectives, filling in gaps, and highlighting important concepts.

An intentional approach

The students' efforts on 6.036 show how SERC aims to work with faculty in ways that work for them, says Julie Shah, associate dean of SERC and professor of aeronautics and astronautics. They adapted the SERC process due to the unique nature of this large course and tight time constraints.

SERC was established more than two years ago through the MIT Schwarzman College of Computing as an intentional approach to bring faculty from divergent disciplines together into a collaborative setting to co-create and launch new course material focused on social and responsible computing.

Each semester, the SERC team invites about a dozen faculty members to join an Action Group dedicated to developing new curricular materials (there are several SERC Action Groups, each with a different mission). They are purposeful in whom they invite, and seek to include faculty members who will likely form fruitful partnerships in smaller subgroups, says David Kaiser, associate dean of SERC, the Germeshausen Professor of the History of Science, and professor of physics.

These subgroups of two or three faculty members hone their shared interest over the course of the term to develop new ethics-related material. But rather than one discipline serving another, the process is a two-way street; every faculty member brings new material back to their course, Shah explains. Faculty are drawn to the Action Groups from all of MIT's five schools.

"Part of this involves going outside your normal disciplinary boundaries and building a language, and then trusting and collaborating with someone new outside of your normal circles. That's why I think our intentional approach has been so successful. It is good to pilot materials and bring new things back to your course, but building relationships is the core. That makes this something valuable for everybody," she says.

Making an impact

Over the past two years, Shah and Kaiser have been impressed by the energy and enthusiasm surrounding these efforts.

They have worked with about 80 faculty members since the program started, and more than 2,100 students took courses that included new SERC content in the last year alone. Those students aren't all necessarily engineers; about 500 were exposed to SERC content through courses offered in the School of Humanities, Arts, and Social Sciences, the Sloan School of Management, and the School of Architecture and Planning.

"Central to SERC is the principle that ethics and social responsibility in computing should be integrated into all areas of teaching at MIT, so it becomes just as relevant as the technical parts of the curriculum," Shah says. Technology, and AI in particular, now touches nearly every industry, so students in all disciplines should have training that helps them understand these tools, and think deeply about their power and pitfalls.

"It is not someone else's job to figure out the why or what happens when things go wrong. It is all of our responsibility and we can all be equipped to do it. Let's get used to that. Let's build up that muscle of being able to pause and ask those tough questions, even if we can't identify a single answer at the end of a problem set," Kaiser says.

For the three SERC Scholars, it was uniquely challenging to carefully craft ethical questions when there was no answer key to refer to. But thinking deeply about such thorny problems also helped Booth, Boulicault, and Ochigame learn, grow, and see the world through the lens of other disciplines.

They are hopeful the undergraduates and teaching assistants in 6.036 take these important lessons to heart, and into their future careers.

"I was inspired and energized by this process, and I learned so much, not just the technical material, but also what you can achieve when you collaborate across disciplines. Just the scale of this effort felt exciting. If we have this cohort of 500 students who go out into the world with a better understanding of how to think about these sorts of problems, I feel like we could really make a difference," Boulicault says.

Read more:
Learning to think critically about machine learning | MIT News | Massachusetts Institute of Technology - MIT News

Machine Learning Application in the Manufacturing Industry – IoT For All

Manufacturers, to keep up with the latest changes in technology, need to explore one of the most critical elements driving factories forward into the future: machine learning. Let's talk about the most important applications and innovations that ML technology is providing in 2022.

Machine learning is a subfield of artificial intelligence, but not all AI technologies count as machine learning. There are various other types of AI that play a role in many industries, such as robotics, natural language processing, and computer vision. If you're curious about how these technologies affect the manufacturing industry, check out our review below.

Basically, machine learning algorithms utilize training data to power software that can solve a problem. This data may come from real-time IoT sensors on a factory floor, or it may come from other methods. Machine learning has a variety of methods, such as neural networks and deep learning. Neural networks imitate biological neurons to discover patterns in a dataset to solve problems. Deep learning utilizes multiple layers of neural networks, where the first layer takes the raw data input and each layer passes processed information on to the next.
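
Here is a minimal sketch of that layer-by-layer flow in Python with NumPy; the weights are random and purely illustrative of the data flow, not a trained model:

```python
# Minimal sketch of the layered idea: raw input enters the first layer and each
# layer passes processed information on to the next. Weights are random here,
# purely to illustrate the flow of data through layers.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # One "neuron layer": weighted sum followed by a ReLU nonlinearity.
    return np.maximum(0, inputs @ weights + biases)

raw_sensor_reading = rng.random(4)             # e.g., 4 IoT sensor values
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # first layer: 4 inputs -> 8 features
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)  # second layer: 8 features -> 3 outputs

hidden = layer(raw_sensor_reading, w1, b1)     # first layer processes the raw data
output = layer(hidden, w2, b2)                 # next layer processes the result
print(output)
```

In a real deep learning system the weights would be learned from training data rather than drawn at random, but the stacking of layers works the same way.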

Let's start by imagining a box with assembly robots, IoT sensors, and other automated machinery. At one end you supply the materials necessary to complete the product; at the other end, the product rolls off the assembly line. The only intervention needed for this device is routine maintenance of the equipment inside. This is the ideal future of manufacturing, and machine learning can help us understand the full picture of how to achieve this.

Aside from the advanced robotics necessary for automated assembly to work, machine learning can help with quality assurance, NDT analysis, and localizing the causes of defects, among other things.

You can think of this factory in a box example as a way of simplifying a larger factory, but in some cases it's quite literal. Nokia is utilizing portable manufacturing sites in the form of retrofitted shipping containers with advanced automated assembly equipment. You can use these portable containers in any location necessary, allowing manufacturers to assemble products on site instead of needing to transport the products longer distances.

Using neural networks, high optical resolution cameras, and powerful GPUs, real-time video processing combined with machine learning and computer vision can complete visual inspection tasks better than humans can. This technology ensures that the factory in a box is working correctly and that unusable products are eliminated from the system.

In the past, machine learning's use in video analysis has been criticized for the quality of video used. This is because images can be blurry from frame to frame, and the inspection algorithm may be subject to more errors. With high-quality cameras and greater graphical processing power, however, neural networks can more efficiently search for defects in real-time without human intervention.

Using various IoT sensors, machine learning can help test the created products without damaging them. An algorithm can search for patterns in the real-time data that correlate with a defective version of the unit, enabling the system to flag potentially unwanted products.

Another way that we can detect defects in materials is through non-destructive testing. This involves measuring a material's stability and integrity without causing damage. For example, you can use an ultrasound machine to detect anomalies like cracks in a material. The machine can measure data that humans can analyze to look for these outliers by hand.

However, outlier detection algorithms, object detection algorithms, and segmentation algorithms can automate this process with much greater efficiency by analyzing the data for recognizable patterns that humans may not be able to see. Machine learning is also not subject to the same kinds of errors that humans are prone to make.
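
As a hedged example of what such automated outlier detection might look like (the sensor readings below are synthetic, and Isolation Forest is just one of many possible algorithms), consider this short sketch:

```python
# Sketch: flagging anomalous readings (e.g., from an ultrasound scan of a part)
# with an off-the-shelf outlier detector. The readings are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal_readings = rng.normal(loc=1.0, scale=0.05, size=(200, 3))   # healthy material
suspect_readings = np.array([[1.0, 1.6, 0.4], [0.2, 1.0, 1.0]])    # possible cracks

detector = IsolationForest(random_state=0).fit(normal_readings)
flags = detector.predict(suspect_readings)   # -1 means "outlier, inspect this part"
print(flags)
```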

One of the core tenets of machine learning's role in manufacturing is predictive maintenance. PwC reported that predictive maintenance will be one of the largest growing machine learning technologies in manufacturing, with a 38 percent increase in market value from 2020 to 2025.

With unscheduled maintenance having the potential to deeply cut into a business's bottom line, predictive maintenance can enable factories to make appropriate adjustments and corrections before machinery experiences more costly failures. We want to make sure that our factory in a box will have as much uptime as possible with the fewest delays, and predictive maintenance can make that happen.

Extensive IoT sensors that record vital information about the operating conditions and status of a machine make predictive maintenance possible. This may include humidity, temperature, and more.

A machine learning algorithm can analyze patterns in data collected over time and reasonably predict when the machine may need maintenance. There are several approaches to achieve this goal:

Thanks to the IoT sensors powering predictive maintenance, machine learning can analyze the patterns in the data to see what parts of the machine need to be maintained to prevent a failure. If certain patterns lead to a trend of defects, it's possible that hardware or software behaviors can be identified as causes of those defects. From here, engineers can come up with solutions to correct the system to avoid those defects in the future. This enables us to reduce the margin of error of our factory in a box scenario.
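
A minimal sketch of that idea follows, with invented sensor values and an arbitrarily chosen risk threshold; a real deployment would use far richer data, features, and domain-specific thresholds:

```python
# Sketch of the predictive-maintenance idea: learn from past sensor snapshots that
# preceded failures, then score the current machine state. All numbers are made up.
from sklearn.ensemble import RandomForestClassifier

# Each row: [temperature_C, humidity_pct, vibration_mm_s]; label 1 = failed within a week.
X_history = [
    [60, 40, 2.0], [62, 42, 2.2], [85, 55, 6.5], [58, 38, 1.8],
    [90, 60, 7.1], [61, 41, 2.1], [88, 58, 6.9], [59, 39, 1.9],
]
y_history = [0, 0, 1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(random_state=0).fit(X_history, y_history)

current_state = [[83, 54, 6.0]]
risk = model.predict_proba(current_state)[0][1]
if risk > 0.5:                      # threshold chosen arbitrarily for this sketch
    print(f"Schedule maintenance soon (failure risk ~{risk:.0%})")
```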

Digital twins are a virtual recreation of the production process based on real-time data from IoT sensors. They can be created as an original hypothetical representation of a system that doesn't yet exist, or they can be a recreation of an existing system.

The digital twin is a sandbox for experimentation in which machine learning can be used to analyze patterns in a simulation to optimize the environment. This helps support quality assurance and predictive maintenance efforts as well. We can also use machine learning alongside digital twins for layout optimization. This works both when planning the layout of a new factory and when optimizing an existing layout.

If we want to optimize every part of the factory, we also need to pay attention to the energy that it requires. The most common way to do this is to use sequential data measurements, which can be analyzed by data scientists with machine learning algorithms powered by autoregressive models and deep neural networks.
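
For illustration, here is a bare-bones autoregressive sketch that forecasts the next energy reading from the previous two, fit by ordinary least squares; the meter readings are synthetic, and a production system would likely use a richer model such as a deep neural network:

```python
# Minimal autoregressive sketch: predict the next energy reading from the previous
# two readings, fit by ordinary least squares. The "meter readings" are synthetic.
import numpy as np

energy = np.array([10.0, 11.2, 10.8, 11.5, 12.1, 11.9, 12.6, 13.0, 12.8, 13.4])

# Lagged design matrix: predict energy[t] from energy[t-1], energy[t-2], and a constant.
X = np.column_stack([energy[1:-1], energy[:-2], np.ones(len(energy) - 2)])
y = energy[2:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

next_reading = coeffs @ np.array([energy[-1], energy[-2], 1.0])
print(f"Forecast for the next interval: {next_reading:.2f}")
```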

We've used machine learning to optimize the factory's production processes, but what about the product itself? BMW introduced the BMW iX Flow at CES 2022 with a special e-ink wrap that can allow it to change the color (or more accurately, the shade) of the car between black and white. BMW explained that "generative design processes are implemented to ensure the segments reflect the characteristic contours of the vehicle and the resulting variations in light and shadow."

Generative design is where machine learning is used to optimize the design of a product, whether it be an automobile, electronic device, toy, or another item. With data and a desired goal, machine learning can cycle through all possible arrangements to find the best design.

ML algorithms can be trained to optimize a design for weight, shape, durability, cost, strength, and even aesthetic parameters.
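
One simple way to picture such an optimization loop is a random search over design parameters against a scoring function; in the sketch below, the parameters, constraints, and scores are invented stand-ins for real engineering requirements:

```python
# Sketch of a generative-design loop: randomly propose design parameters and keep
# the best according to a scoring function. The score formulas and limits are
# invented stand-ins for real weight, strength, and cost constraints.
import random

random.seed(0)

def score(thickness_mm, rib_count):
    weight = thickness_mm * 2.0 + rib_count * 0.5
    strength = thickness_mm * 3.0 + rib_count * 1.5
    cost = thickness_mm * 1.2 + rib_count * 0.8
    if strength < 20.0:              # hard requirement: design must be strong enough
        return float("-inf")
    return -(weight + 0.5 * cost)    # among strong-enough designs, prefer light and cheap

best = None
for _ in range(10_000):
    candidate = (random.uniform(1.0, 10.0), random.randint(0, 12))
    if best is None or score(*candidate) > score(*best):
        best = candidate

print(f"Best design found: thickness={best[0]:.2f} mm, ribs={best[1]}")
```

Real generative design tools replace the random proposals with smarter search strategies, but the propose-score-keep loop is the same basic shape.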

The generative design process can be based on these algorithms:

Let's step away from the factory in a box example for a bit and look at a broader picture of needs in manufacturing. Production is only one element. Supply chain roles connected to a manufacturing center are also being improved with machine learning technologies, such as logistics route optimization and warehouse inventory control. These make up a cognitive supply chain that continues to evolve in the manufacturing industry.

AI-powered logistics solutions use object detection models instead of barcode detection, thus replacing manual scanning. Computer vision systems can detect shortages and overstock. By identifying these patterns, managers can be made aware of actionable situations. Computers can even be left to take action automatically to optimize inventory storage.
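
As a simplified sketch of how detection counts could be turned into shortage and overstock alerts (the counts are hard-coded here in place of a real detection model, and the stock thresholds are invented), consider:

```python
# Sketch: turning object-detection counts into shortage/overstock alerts.
# In practice the counts would come from a detection model scanning shelf images;
# here they are hard-coded, and the stock thresholds are invented.
detected_counts = {"widget_a": 3, "widget_b": 42, "widget_c": 17}
stock_limits = {"widget_a": (10, 50), "widget_b": (10, 30), "widget_c": (10, 50)}

for item, count in detected_counts.items():
    low, high = stock_limits[item]
    if count < low:
        print(f"{item}: shortage ({count} detected, reorder point is {low})")
    elif count > high:
        print(f"{item}: overstock ({count} detected, ceiling is {high})")
    else:
        print(f"{item}: OK ({count} detected)")
```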

At MobiDev, we have researched a use case of creating a system capable of detecting objects for logistics. Read more about object detection using small datasets for automated item counting in logistics.

How much should a factory produce and ship out? This is a question that can be difficult to answer. However, with access to appropriate data, machine learning algorithms can help factories understand how much they should be making without overproducing. The future of machine learning in manufacturing depends on innovative decisions.

Visit link:
Machine Learning Application in the Manufacturing Industry - IoT For All

FDA Issues Advisory on Use of AI and Machine Learning for Large Vessel Occlusion in the Brain – Diagnostic Imaging

Suggesting that some radiologists may not be aware of the intended use of computer-aided triage and notification (CADt) devices, the Food and Drug Administration (FDA) has issued an advisory on the use of the imaging software for patients with suspected large vessel occlusion (LVO) in the brain.

Emphasizing proper use of CADt software, the FDA notes these devices are not intended to substitute for diagnostic assessment by radiologists. While CADt devices can help flag and prioritize brain imaging with findings that are suspicious for LVO, the advisory points out that an LVO, a common cause of acute ischemic strokes, may still be present even if it is not flagged by the CADt imaging software.

If there is any potential over-reliance on CADt software, Vivek Bansal, MD said it may stem from a team of health-care providers striving to do the right thing for the patient under tight time constraints. While interventionalists, neurosurgeons and neurologists all have strong knowledge of brain vessels, there may be different levels of experience, according to Dr. Bansal, the national subspecialty lead for neuroradiology at Radiology Partners. He added that while these specialists look closely at images they take in the operating suite, they may not look at the actual CT images to the same level.

In regard to the imaging, Dr. Bansal said one may be looking at tiny branching vessels that are diving up and down into different slices of the images, and you have to scroll up and down to really trace them out vessel by vessel. This can be challenging and particularly hard to do on a smartphone in a brightly lit room, pointed out Dr. Bansal.

"The clock is ticking, and time is brain. We are trying to race against the clock because every minute we take to arrive at a diagnosis, more brain cells may be dying (if the patient has a clot). The quicker we can get them to a diagnosis and the patient gets to a cath lab, the better the outcomes for the patient. I think that is the biggest challenge: trying to do something that is very meticulous in a very small amount of time," explained Dr. Bansal.

The FDA advisory also maintained that it is important to have awareness of the design capabilities of different CADt devices, many of which have artificial intelligence (AI) or machine learning technology. For example, the FDA cautioned that LVO CADt devices may not assess all intracranial vessels. Dr. Bansal said this is an important distinction with AI tools.

"While some AI tools are very good at looking at an M1 occlusion, which is the proximal part of the middle cerebral artery, the newer AI tools are capable of looking at M2 occlusions with proximal anterior cerebral artery (ACA) and posterior cerebral artery (PCA) occlusions. All of these things are important in terms of patient care," maintained Dr. Bansal, who is affiliated with the East Houston Pathology Group in Texas.

Dr. Bansal said the key is understanding the role of AI-enabled devices and their value in triaging cases.

"At any given moment, I might have 40 stat exams on my list. I'm cranking through them as fast as I can, but if AI tools are saying 'Hey, look at this one next,' whether it is a potential large vessel occlusion or brain bleed, that is very helpful," suggested Dr. Bansal. "Where we are at right now, I think that the only way we can look at AI is to look at it as a triaging tool."

Continue reading here:
FDA Issues Advisory on Use of AI and Machine Learning for Large Vessel Occlusion in the Brain - Diagnostic Imaging

AdTheorent Uses Machine Learning-Powered Predictive Advertising to Boost Donations and Drive Awareness for American Cancer Society – PR Newswire

AdTheorent's performance-first platform drove a 68% engagement rate and delivered a Return on Ad Spend that exceeded benchmark by 117%

NEW YORK, April 14, 2022 /PRNewswire/ -- AdTheorent Holding Company, Inc. ("AdTheorent" or the "Company") (Nasdaq: ADTH), a leading programmatic digital advertising company using advanced machine learning technology and privacy-forward solutions to deliver measurable value for advertisers and marketers, today announced campaign results from a recent digital fundraising campaign for American Cancer Society (ACS). The campaign goal was to drive cost-effective donations and positive Return on Ad Spend (RoAS), as well as raise awareness of ACS. The campaign drove strong donations revenue, yielding an overall campaign RoAS which was 2-times more efficient than the ACS target benchmark.

The Approach:

AdTheorent worked with Tombras, media agency of record for ACS, to drive efficient donations and achieve a strong RoAS, in addition to increasing awareness of the brand's core areas of focus, including: advocacy, discovery, and patient support. In order to achieve the dual-pronged objectives, AdTheorent leveraged a mix of cross-device rich media, interactive banners and display tactics, targeted using AdTheorent's advanced predictive advertising platform. AdTheorent developed custom machine learning models fueled by non-individualized statistics to identify and reach consumers with the highest likelihood of completing the required campaign actions. AdTheorent's programmatic performance optimizers utilized myriad signals in the custom predictive models such as ad position, publisher, geo-intelligence, non-individualized user device attributes, location DMA, time of day, connection signal and many others to find the most qualified users and reach ACS' target audience of prospective donors, current donors, and lapsed donors, with a national footprint. Additionally, AdTheorent utilized real-time contextual signals to identify and reach consumers engaging with content related to ACS or charitable donations. Through in-unit pixel placement, user engagement fueled the targeting, allowing AdTheorent to optimize in real time and scale targeting to drive results for each targeting tactic.
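
Purely as an illustration of the general approach of scoring impressions with non-individualized signals (this is not AdTheorent's actual model, data, or feature set; all names and numbers are invented), a bare-bones sketch might look like this:

```python
# Illustrative sketch only: scoring bid opportunities by predicted likelihood of a
# campaign action using non-individualized signals. Features and data are invented
# and do not represent AdTheorent's actual models or data.
from sklearn.linear_model import LogisticRegression

# Features per past impression: [hour_of_day, is_mobile, ad_position]; label 1 = donated.
X_past = [[9, 1, 1], [22, 0, 3], [14, 1, 2], [20, 1, 1], [8, 0, 3], [19, 1, 1]]
y_past = [1, 0, 0, 1, 0, 1]

model = LogisticRegression().fit(X_past, y_past)

# Score a new bid opportunity; bid more aggressively when predicted likelihood is high.
new_impression = [[18, 1, 1]]
print(model.predict_proba(new_impression)[0][1])
```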

"Every dollar raised helps the American Cancer Society improve the lives of people with cancer and their families as the only organization that integrates advocacy, discovery and direct patient support," said Ben Devore, Director, Media Strategy at ACS. "Every bit of our campaign spend needs to be optimized for the best possible performance, so our key advertising goal was to reach the most probable donors, and then engage them in a way that would drive donations. AdTheorent helped us outperform our KPIs, with a very efficient return on ad spend and an exceptionally high engagement rate of nearly 70% throughout the duration of the campaign which helps our organization achieve greater impact, overall."

The Results:

The campaign exceeded all benchmarks across all tactics:

AdTheorent's data-driven platform identified targeting variables which yielded conversion lift, providing valuable insights for future flights of the campaign.

"AdTheorent Predictive Advertising uses advanced machine learning and data science to drive real-world performance and advertiser ROI in the most privacy-forward and efficient manner," said James Lawson, CEO at AdTheorent. "We are honored to work with Tombras and ACS to further ACS's vital mission. And we are proud of the results we have helped produce, driving donation revenue at an efficiency rate 2X greater than ACS expectations."

About AdTheorent

AdTheorent uses advanced machine learning technology and privacy-forward solutions to deliver impactful advertising campaigns for marketers. AdTheorent's industry-leading machine learning platform powers its predictive targeting, geo-intelligence, audience extension solutions and in-house creative capability, Studio AT. Leveraging only non-sensitive data and focused on the predictive value of machine learning models, AdTheorent's product suite and flexible transaction models allow advertisers to identify the most qualified potential consumers coupled with the optimal creative experience to deliver superior results, measured by each advertiser's real-world business goals.

AdTheorent is consistently recognized with numerous technology, product, growth and workplace awards. AdTheorent was awarded "Best AI-Based Advertising Solution" (AI Breakthrough Awards) and "Most Innovative Product" (B.I.G. Innovation Awards) for four consecutive years. Additionally, AdTheorent is the only six-time recipient of Frost & Sullivan's "Digital Advertising Leadership Award." AdTheorent is headquartered in New York, with fourteen offices across the United States and Canada. For more information, visit adtheorent.com.

About Tombras

Tombras is a 430+ person full-service, independent advertising agency headquartered in Knoxville, Tennessee, connecting data and creativity for business results. It has been named a FastCo Most Innovative Company, to the AdAge A-List, and a Most Effective Independent Agency per Effie Worldwide. Tombras is one of the fastest growing full-service independent agencies with offices in New York, Atlanta, Washington, D.C., Charlotte, NC, and headquarters in Knoxville. Tombras works with notable brands including American Cancer Society, Big Lots, MoonPie, Mozilla Firefox, Orangetheory Fitness, Pernod Ricard and others. More information: tombras.com.

About American Cancer Society

The American Cancer Society is on a mission to free the world from cancer. We invest in lifesaving research, provide 24/7 information and support, and work to ensure that individuals in every community have access to cancer prevention, detection, and treatment. For more information, visit cancer.org.

SOURCE AdTheorent

Read more:
AdTheorent Uses Machine Learning-Powered Predictive Advertising to Boost Donations and Drive Awareness for American Cancer Society - PR Newswire