Archive for the ‘Artificial Intelligence’ Category

Dangers & Risks of Artificial Intelligence – ITChronicles

Due to hype and popular fiction, the dangers of artificial intelligence (AI) are typically associated in the public eye with sci-fi horror scenarios. These often involve killer robots and hyper-intelligent computer systems that consider humanity a nuisance to be eliminated for the good of the planet. While nightmares like this play out as overblown and silly in comic books and on-screen, the risks of artificial intelligence cannot be dismissed so lightly, and AI dangers do exist.

In this article, we'll be looking at some of the real risks of artificial intelligence, and why AI is dangerous in certain contexts or when wrongly applied.

Artificial intelligence encompasses a range of technologies and systems, from Google's search algorithms, through smart home gadgets, to military-grade autonomous weapons. So issuing a blanket confirmation or denial to the question "Is artificial intelligence dangerous?" isn't that simple; the issue is much more nuanced than that.

Most artificial intelligence systems today qualify as weak or narrow AI: technologies designed to perform specific tasks such as searching the internet, responding to environmental changes like temperature, or recognizing faces. Generally speaking, narrow AI performs better than humans at those specific tasks.

For some AI developers, however, the Holy Grail is strong AI, or artificial general intelligence (AGI): a level of technology at which machines would have a much greater degree of autonomy and versatility, enabling them to outperform humans in almost all cognitive tasks.

While the superintelligence of strong AI has the potential to help us eradicate war, disease, and poverty, there are significant dangers of artificial intelligence at this level. However, there are those who question whether strong AI will ever be achieved, and others who maintain that if and when it does arrive, it can only be beneficial.

Optimism aside, the increasing sophistication of technologies and algorithms may mean that AI is dangerous when its goals and implementation run contrary to our own expectations or objectives. The risks of AI in this context may hold even at the level of narrow or weak AI. If, for example, a home or in-vehicle thermostat system is poorly configured or hacked, its operation could pose a serious hazard to human health through overheating or freezing. The same would apply to smart city management systems or autonomous vehicle steering mechanisms.
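
As a toy illustration of why configuration matters, here is a minimal sketch of a thermostat control loop with an explicit safety clamp. Everything here is hypothetical: the names, the limits, and the clamp itself are invented for the example, not drawn from any real product.

```python
# Toy thermostat loop. All names and limits are invented for illustration.

SAFE_MIN_C = 5.0    # below this, freezing becomes a hazard
SAFE_MAX_C = 35.0   # above this, overheating becomes a hazard

def clamp_setpoint(requested_c: float) -> float:
    """Bound any requested setpoint (mistyped or injected by an attacker)
    to a safe range before the controller acts on it."""
    return max(SAFE_MIN_C, min(SAFE_MAX_C, requested_c))

def control_step(current_c: float, requested_c: float) -> str:
    # Without this clamp, a bad input drives the system to extremes.
    target = clamp_setpoint(requested_c)
    if current_c < target - 0.5:
        return "heat_on"
    if current_c > target + 0.5:
        return "cool_on"
    return "idle"

# A hacked request for 90 C is clamped to 35 C before the heater acts.
print(control_step(current_c=21.0, requested_c=90.0))  # -> heat_on (toward 35, not 90)
```

The point is not the specific numbers but the design choice: a poorly configured system is one where this kind of bounds check is missing or set wrongly.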

Most researchers agree that a strong or AGI system would be unlikely to exhibit human emotions such as love or hate, and would therefore not pose AI dangers through benevolent or malevolent intentions. However, even the strongest AI must initially be programmed by humans, and it's in this context that the danger lies. Specifically, artificial intelligence analysts highlight two scenarios where the underlying programming or human intent of a system's design could cause problems:

The first scenario involves AI programmed to do something devastating. This threat covers all existing and future autonomous weapons systems (military drones, robots, missile defenses, etc.), as well as technologies capable of intentionally or unintentionally causing massive harm or physical destruction through misuse, hacking, or sabotage.

Besides the prospect of an AI arms race and the possibility of AI-enabled warfare involving autonomous weaponry, there are AI risks posed by the design and deployment of the technology itself. With high-stakes activity an inherent part of military design, such systems would probably have fail-safes that make them extremely difficult to deactivate once started, and their human owners could conceivably lose control of them in escalating situations.

The second scenario involves AI programmed to do something beneficial that adopts a destructive method of achieving its goal. The classic illustration of this AI danger is the self-driving car. If you ask such a vehicle to take you to the airport as quickly as possible, it could quite literally do so: breaking every traffic law in the book, causing accidents, and freaking you out completely in the process.
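
A minimal sketch of the underlying failure, often called objective misspecification, is shown below. The routes, costs, and penalty weights are all invented placeholders; the point is only that an optimizer takes its objective literally, so anything the objective omits gets sacrificed.

```python
# Hypothetical route-planner objectives. The optimizer does exactly
# what the cost function says, nothing more.

routes = [
    {"name": "legal",    "minutes": 30, "violations": 0, "crash_risk": 0.001},
    {"name": "reckless", "minutes": 18, "violations": 7, "crash_risk": 0.050},
]

def naive_cost(route):
    # "As quickly as possible", taken literally: only time counts.
    return route["minutes"]

def aligned_cost(route):
    # Same goal, plus penalty terms encoding what the passenger also wants.
    return route["minutes"] + 100 * route["violations"] + 10_000 * route["crash_risk"]

print(min(routes, key=naive_cost)["name"])    # -> reckless
print(min(routes, key=aligned_cost)["name"])  # -> legal
```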

At the superintelligence level of AGI, imagine a geo-engineering or climate control system that's given free rein to implement its programming in the most efficient manner possible. The damage it could cause to infrastructure and ecosystems could be catastrophic.

How dangerous is AI? At its current rate of development, artificial intelligence has already exceeded the expectations of many observers, reaching milestones that were considered decades away just a few years ago.

While some experts estimate that human-level AI is still centuries away, most researchers are coming round to the opinion that it could happen before 2060. And the prevailing view amongst observers is that, as long as we're not 100% sure that artificial general intelligence won't happen this century, it's a good idea to start safety research now to prepare for its arrival.

Many of the safety problems associated with superintelligent AI are so complex that they may require decades to solve. A superintelligent AI will, by definition, be very good at achieving its goals, whatever they may be. As humans, we'll need to ensure that its goals are completely aligned with ours. The same holds for weaker artificial intelligence systems as the technology continues to evolve.

Intelligence enables control, and as technology becomes smarter, the greatest danger of artificial intelligence lies in its capacity to exceed human intelligence. Once that milestone is reached, we run the risk of losing control of the technology. And this danger becomes even more severe if the goals of that technology don't align with our own objectives.

A scenario in which an AGI whose goals run counter to ours uses the internet to enforce its internal directives illustrates why AI is dangerous in this respect. Such a system could potentially impact the financial markets, manipulate social and political discourse, or introduce technological innovations that we can barely imagine, much less keep up with.

The keys to determining whether artificial intelligence is dangerous lie in its underlying programming, the method of its deployment, and whether or not its goals are in alignment with our own.

As technology continues its march toward artificial general intelligence, AI has the potential to become more intelligent than any human, and we currently have no way of predicting how it will behave. What we can do is everything in our power to ensure that the goals of that intelligence remain compatible with ours, and to pursue the research and design needed to implement systems that keep them that way.

Master in Artificial Intelligence Online | IU International

With the IU and LSBU (London South Bank University) dual degree track, you get a unique opportunity: you can choose to graduate with both a German and a British graduation certificate, without any extra academic requirements. The study programmes at IU and at LSBU are coordinated and therefore equivalent to each other.

Start your studies at IU, and if you want to apply for your British certificate*, all you have to do is send in your application and pay the required fee. You'll then be awarded a degree from LSBU following your graduation, provided all of your study requirements have been fulfilled successfully.

Graduate with a German Bachelor's, MBA or Master's degree along with a UK Bachelor's with Honours (Hons), MBA or Master's.

London South Bank University is well known for its international character, with over 18,000 students from more than 130 countries. Like IU, LSBU has received multiple awards and has been praised for its focus on improving graduates' career opportunities.

Our cooperation was born out of one goal: to help you get the best jobs in the world with a dual degree.

Get in touch with our Student Advisory Team, send in your application form, and receive your British graduation certificate after you've successfully graduated from IU.

*only available for selected study programmes: B.Sc. Data Science, B.Sc. Computer Science, B.A.A. Business Administration, B.A. International Management, M.Sc. Artificial Intelligence, M.Sc. Computer Science, M.Sc. Data Science, M.A. Master Management with electives (Engineering, Finance & Accounting, Int. Marketing, IT, Leadership, Big Data), MBA with electives (Big Data, Engineering, Finance & Accounting, IT, Marketing).

MS in Artificial Intelligence | University of Michigan-Dearborn

The Artificial Intelligence master's degree program is designed as a 30-credit-hour curriculum that gives students a comprehensive framework for artificial intelligence with one of four concentration areas: (1) Computer Vision, (2) Intelligent Interaction, (3) Machine Learning, and (4) Knowledge Management and Reasoning.

Students will engage in an extensive core curriculum intended to develop depth in all the core concepts that build a foundation for artificial intelligence theory and practice. They will also have the opportunity to build on that core knowledge by taking a variety of elective courses from colleges throughout campus, exploring key contextual areas or more complex technical AI applications.

The program is accessible to both full-time and part-time students, and aims to train students who aspire to AI research and development (R&D) or leadership careers in industry. To accommodate working professionals, courses for the MS in AI are offered in the late afternoon and evening, allowing students to earn the degree through part-time study. The program may be completed entirely on campus, entirely online, or through a combination of on-campus and online courses.

If you have additional questions, please contact the program director: Dr. Jin Lu (jinluz@umich.edu).

The truth about AI and ROI: Can artificial intelligence really deliver? – VentureBeat

More than ever, organizations are putting their confidence and investment into the potential of artificial intelligence (AI) and machine learning (ML).

According to the 2022 IBM Global AI Adoption Index, 35% of companies report using AI today in their business, while an additional 42% say they are exploring AI. Meanwhile, a McKinsey survey found that 56% of respondents reported they had adopted AI in at least one function in 2021, up from 50% in 2020.

But can investments in AI deliver true ROI that directly impacts a company's bottom line?

According to Domino Data Lab's recent REVelate survey, which polled attendees at New York City's Rev3 conference in May, many respondents seem to think so. Nearly half, in fact, expect double-digit growth as a result of data science. And four in five respondents (79%) said that data science, ML and AI are critical to the overall future growth of their company, with 36% calling it the single most critical factor.

Implementing AI, of course, is no easy task, and other survey data shows the other side of the confidence coin. For example, recent survey data from AI engineering firm CognitiveScale finds that, although execs know that data quality and deployment are critical success factors for app development that drives digital transformation, more than 76% aren't sure how to get there in their target 12-18 month window. In addition, 32% of execs say that it has taken longer than expected to get an AI system into production.

"ROI from AI is possible, but it must be accurately described and personified according to a business goal," Bob Picciano, CEO of CognitiveScale, told VentureBeat.

"If the business goal is to get more long-range prediction and increased prediction accuracy with historical data, that's where AI can come into play," he said. "But AI has to be accountable to drive business effectiveness; it's not sufficient to say an ML model was 98% accurate."

Instead, the ROI could be, for example, that AI-driven capabilities reduce average call handling time in order to improve call center effectiveness.

"That kind of ROI is what they talk about in the C-suite," he explained. "They don't talk about whether the model is accurate or robust or drifting."

Shay Sabhikhi, co-founder and COO at CognitiveScale, added that he's not surprised that 76% of respondents reported having trouble scaling their AI efforts. "That's exactly what we're hearing from our enterprise clients," he said. One problem, he explained, is friction between data science teams and the rest of the organization, which doesn't know what to do with the models they develop.

"Those models may have potentially the best algorithms and precision recall, but sit on the shelf because they literally get thrown over to the development team that then has to scramble, trying to assemble the application together," he said.

At this point, however, organizations have to be accountable for their investments in AI, because AI is no longer a series of science experiments, Picciano pointed out. "We call it going from the lab to life," he said. "I was at a chief data analytics officer conference and they all said, 'How do I scale? How do I industrialize AI?'"

However, not everyone agrees that ROI is even the best way to measure whether AI drives value in the organization. According to Nicola Morini Bianzino, global chief technology officer at EY, thinking of artificial intelligence in the enterprise in terms of use cases that are then measured through ROI is the wrong way to go about AI.

"To me, AI is a set of techniques that will be deployed pretty much everywhere across the enterprise; there is not going to be an isolation of a use case with the associated ROI analysis," he said.

Instead, he explained, organizations simply have to use AI everywhere. "It's almost like the cloud, where two or three years ago I had a lot of conversations with clients who asked, 'What is the ROI? What's the business case for me to move to the cloud?' Now, post-pandemic, that conversation doesn't happen anymore. Everybody just says, 'I've got to do it.'"

Also, Bianzino pointed out, discussing AI and ROI depends on what you mean by "using AI."

"Let's say you are trying to apply some self-driving capabilities, that is, computer vision as a branch of AI," he said. "Is that a business case? No, because you cannot implement self-driving without AI." The same is true for a company like EY, which ingests massive amounts of data and provides advice to clients, which can't be done without AI. "It's something that you cannot isolate away from the process; it's built into it," he said.

In addition, AI, by definition, is not productive or efficient on day one. It takes time to get the data, train the models, evolve the models and scale up the models. "It's not like one day you can say, 'I'm done with the AI,' and 100% of the value is right there. No, this is an ongoing capability that gets better in time," he said. "There is not really an end in terms of value that can be generated."

In a way, Bianzino said, AI is becoming part of the cost of doing business. "If you are in a business that involves data analysis, you cannot not have AI capabilities," he explained. "Can you isolate the business case of these models? It is very difficult, and I don't think it's necessary. To me, it's almost like a cost of the infrastructure to run your business."

Kjell Carlsson, head of data science strategy and evangelism at enterprise MLOps provider Domino Data Lab, says that at the end of the day, what organizations want is a measure of the business impact of AI, the ROI: how much it contributed to the bottom line. But one problem is that this can be quite disconnected from how much work has gone into developing the model.

"So if you create a model which improves click-through conversion by a percentage point, you've just added several million dollars to the bottom line of the organization," he said. "But you could also have created a good predictive maintenance model which helped give advance warning that a piece of machinery needs maintenance before it fails. In that case, the dollar-value impact to the organization could be entirely different, even though one of them might end up being a much harder problem," he added.
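
A back-of-the-envelope sketch of the conversion example: every figure below is an assumed placeholder (the article gives no traffic or revenue numbers), chosen only to show how a one-point lift turns into a dollar amount.

```python
# ROI arithmetic for a conversion-lift model. All figures are assumed
# placeholders, not data from the article.

monthly_visitors = 5_000_000   # assumed site traffic
lift = 0.01                    # the "one percentage point" conversion improvement
avg_order_value = 60.00        # assumed revenue per converted visitor

extra_orders_per_year = monthly_visitors * 12 * lift
extra_revenue = extra_orders_per_year * avg_order_value
print(f"Additional annual revenue: ${extra_revenue:,.0f}")
# -> Additional annual revenue: $36,000,000 ("several million dollars")
```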

"Overall, organizations do need a balanced scorecard where they are tracking AI production, because if you're not getting anything into production, then that's probably a sign that you've got an issue," he said. "On the other hand, if you are getting too much into production, that can also be a sign that there's an issue."

For example, the more models data science teams deploy, the more models they're on the hook for managing and maintaining. "So you deployed this many models in the last year, so you can't actually undertake these other high-value ones that are coming your way," he explained.

But another issue in measuring the ROI of AI is that for a lot of data science projects, the outcome isn't a model that goes into production. "If you want to do a quantitative win-loss analysis of deals in the last year, you might want to do a rigorous statistical investigation of that," he said. "But there's no model that would go into production; you're using the AI for the insights you get along the way."

Still, organizations can't measure the role of AI if data science activities aren't tracked. "One of the problems right now is that so few data science activities are really being collected and analyzed," said Carlsson. "If you ask folks, they say they don't really know how the model is performing, or how many projects they have, or how many code commits their data scientists have made within the last week."

One reason for that is the very disconnected set of tools data scientists are required to use. "This is one of the reasons why Git has become all the more popular as a repository, a single source of truth for your data scientists in an organization," he explained. MLOps vendors such as Domino Data Lab offer platforms that support these different tools. "The degree to which organizations can create these more centralized platforms is important," he said.
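
To make the tracking idea concrete, here is a minimal sketch using MLflow as a stand-in for a centralized experiment-tracking platform (the article mentions Domino Data Lab's platform, not MLflow, and all parameter and metric values below are invented):

```python
# Minimal experiment-tracking sketch. MLflow stands in for whatever
# centralized platform an organization uses; values are invented.
import mlflow

mlflow.set_experiment("churn-model")

with mlflow.start_run(run_name="gbm-baseline"):
    mlflow.log_param("learning_rate", 0.05)    # hyperparameters, recorded once
    mlflow.log_param("n_estimators", 300)
    mlflow.log_metric("auc", 0.87)             # performance, queryable later
    mlflow.log_metric("precision_at_10", 0.42)

# With runs logged centrally, "how is the model performing?" and "how many
# experiments ran last week?" become queries rather than guesses.
```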

Wallaroo CEO and founder Vid Jain spent close to a decade in the high-frequency trading business at Merrill Lynch, where his role, he said, was to deploy machine learning at scale and do so with a positive ROI.

"The challenge was not actually developing the data science, cleansing the data or building the trade repositories, now called data lakes. By far, the biggest challenge was taking those models, operationalizing them and delivering the business value," he said.

Delivering the ROI turns out to be very hard. "Ninety percent of these AI initiatives don't generate their ROI, or they don't generate enough ROI to be worth the investment," he said. "But this is top of mind for everybody. And the answer is not one thing."

A fundamental issue is that many assume that operationalizing machine learning is not much different from operationalizing a standard kind of application, he explained, adding that there is in fact a big difference, because AI is not static.

"It's almost like tending a farm, because the data is living, the data changes and you're not done," he said. "It's not like you build a recommendation algorithm and then people's behavior of how they buy is frozen in time. People change how they buy. All of a sudden, your competitor has a promotion. They stop buying from you. They go to the competitor. You have to constantly tend to it."
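
A minimal sketch of what that tending can look like in code: a drift check that compares recent data against a training-time baseline. The statistic, threshold, and data below are assumed placeholders; production systems typically use richer measures such as the population stability index.

```python
# Minimal data-drift check. Threshold and data are invented placeholders.
import numpy as np

def mean_shift_score(baseline: np.ndarray, recent: np.ndarray) -> float:
    """Shift of the recent mean, measured in baseline standard deviations."""
    return abs(recent.mean() - baseline.mean()) / (baseline.std() + 1e-9)

rng = np.random.default_rng(0)
baseline = rng.normal(100.0, 15.0, 10_000)  # e.g., basket size at training time
recent = rng.normal(112.0, 15.0, 1_000)     # behavior after a competitor's promotion

if mean_shift_score(baseline, recent) > 0.5:  # assumed alert threshold
    print("Drift detected: review the model and consider retraining")
```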

Ultimately, every organization needs to decide how it will align its culture to the end goal of implementing AI. "Then you really have to empower the people to drive this transformation, and then make the people that are critical to your existing lines of business feel like they're going to get some value out of the AI," he said.

Most companies are still early in that journey, he added. "I don't think most companies are there yet, but I've certainly seen over the last six to nine months that there's been a shift toward getting serious about the business outcome and the business value."

But the question of how to measure the ROI of AI remains elusive for many organizations. "For some there are some basic things, like they can't even get their models into production, or they can but they're flying blind, or they are successful but now they want to scale," Jain said. "But as far as the ROI, there is often no P&L associated with machine learning."

Often, AI initiatives are part of a Center of Excellence and the ROI is grabbed by the business units, he explained, while in other cases it's simply difficult to measure.

"The problem is, is the AI part of the business? Or is it a utility? If you're a digital native, AI might be part of the fuel the business runs on," he said. "But in a large organization that has legacy businesses or is pivoting, how to measure ROI is a fundamental question they have to wrestle with."

Artificial intelligence has reached a threshold. And physics can help it break new ground – Interesting Engineering

For years, physicists have been making major advances and breakthroughs in the field using their minds as their primary tools. But what if artificial intelligence could help with these discoveries?

Last month, researchers at Duke University demonstrated that incorporating known physics into machine learning algorithms can open up new levels of discovery into material properties, according to a press release from the institution. In a first-of-its-kind project, they constructed a machine-learning algorithm to deduce the properties of a class of engineered materials known as metamaterials and to determine how they interact with electromagnetic fields.

The results proved extraordinary. The new algorithm accurately predicted the metamaterials' properties more efficiently than previous methods, while also providing new insights.

"By incorporating known physics directly into the machine learning, the algorithm can find solutions with less training data and in less time," said Willie Padilla, professor of electrical and computer engineering at Duke. "While this study was mainly a demonstration showing that the approach could recreate known solutions, it also revealed some insights into the inner workings of non-metallic metamaterials that nobody knew before."

In their new work, the researchers focused on making discoveries that were accurate and made sense.

"Neural networks try to find patterns in the data, but sometimes the patterns they find don't obey the laws of physics, making the model it creates unreliable," said Jordan Malof, assistant research professor of electrical and computer engineering at Duke. "By forcing the neural network to obey the laws of physics, we prevented it from finding relationships that may fit the data but aren't actually true."

They did that by imposing on the neural network a physics model known as the Lorentz model: a set of equations that describes how the intrinsic properties of a material resonate with an electromagnetic field. This, however, was no easy feat to achieve.
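
For reference, the classical Lorentz oscillator gives the frequency-dependent complex permittivity of such a material. The sketch below encodes that textbook response with illustrative parameter values; it is not the Duke group's published model.

```python
# Classical Lorentz-oscillator permittivity:
#   eps(w) = eps_inf + wp**2 / (w0**2 - w**2 - 1j*gamma*w)
# Textbook physics with illustrative parameters, not the Duke group's code.
import numpy as np

def lorentz_permittivity(w, eps_inf=1.0, wp=1.0e15, w0=2.0e15, gamma=1.0e13):
    """Complex relative permittivity of a single Lorentz oscillator.

    w: angular frequency (rad/s); wp: plasma frequency;
    w0: resonance frequency; gamma: damping rate.
    """
    return eps_inf + wp**2 / (w0**2 - w**2 - 1j * gamma * w)

w = np.linspace(0.5e15, 3.5e15, 7)
print(lorentz_permittivity(w))  # resonance shows up near w0
```

In the physics-informed setup the article describes, the network predicts oscillator parameters like these, and the fixed Lorentz equations turn them into a spectrum, so any fit the network finds is physical by construction.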

"When you make a neural network more interpretable, which is in some sense what we've done here, it can be more challenging to fine-tune," said Omar Khatib, a postdoctoral researcher working in Padilla's laboratory. "We definitely had a difficult time optimizing the training to learn the patterns."

The researchers were pleasantly surprised to find that this model worked more efficiently than previous neural networks the group had created for the same tasks, dramatically reducing the number of parameters needed for the model to determine the metamaterials' properties. The new model could even make discoveries all on its own.

Now, the researchers are getting ready to apply their approach to uncharted territory.

"Now that we've demonstrated that this can be done, we want to apply this approach to systems where the physics is unknown," Padilla said.

"Lots of people are using neural networks to predict material properties, but getting enough training data from simulations is a giant pain," Malof added. "This work also shows a path toward creating models that don't need as much data, which is useful across the board."

The study is published in the journal Advanced Optical Materials.
