
How Could AI be used in the Online Casino Industry – Rebellion Research


Online casinos have seen a rapid boom over the last couple of years. There are plenty of reasons why, starting with the internet.

The average player can start a game within minutes, no matter where they are.

They don't have to visit a casino hall, nor do they have to deal with spotty internet.

The online casino experience has also drastically improved. Operators did this by adding the latest features and technologies as soon as possible.

Games began as software you downloaded onto your computer. Now they are websites connected to a server, offering a smoother, more streamlined experience.

From slots to table games and game shows, players have a wider variety to choose from too. And all these online casino games are very easy to find across a wide range of casino operators listed on Gambling.com.

Players can also choose from different ways to play. Will it be against bots or against real players?

As casinos continue to innovate, AI has been an aspect many have been slowly exploring. It is certainly an exciting prospect, and it brings plenty of ways to improve the current gaming experience.

Casino operators stand to benefit from it, but users will see plenty of benefits too. This applies to both traditional gamers and modern gamers.

Keep reading to see how AI can help.

With automated AI for customer service, casino operators and users both enjoy a vastly improved service.

For instance, casino operators will see lower overhead costs.

Meanwhile, users enjoy instant, around-the-clock service. AI can also analyze billing history and other preferences to deliver more targeted responses.

AI in customer support has come far and today offers an increasingly human-like response. Of course, humans will still be available in the background to handle more complex issues.

Gambling addiction is a serious issue and its dangers have been well documented in the last few decades.

Signs of addiction include borrowing money to gamble and not knowing when to draw the line. Chasing losses and lying are a few other red flags.

But AI has the ability to create more responsible gaming experiences.

How, you ask?

Well, it can analyze the activities of a player to identify irresponsible patterns. With this input, casinos can create a safety net that keeps their players safe and protected.
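As a rough illustration of what such a safety net could look like, here is a minimal rule-based sketch over session data. This is not any operator's actual system; the field names and thresholds are hypothetical assumptions chosen for the example:

```python
from dataclasses import dataclass

@dataclass
class Session:
    minutes: int        # how long the session ran
    net_loss: float     # money lost this session (negative means the player won)
    redeposited: bool   # player deposited again immediately after the session

def flag_risky_play(sessions, max_minutes=180, chase_threshold=3):
    """Hypothetical rule-based check for irresponsible play patterns.

    Flags a player when any session runs unusually long, or when losses
    are repeatedly followed by immediate re-deposits (loss chasing).
    Thresholds here are illustrative, not industry standards.
    """
    reasons = []
    if any(s.minutes > max_minutes for s in sessions):
        reasons.append("unusually long session")
    chases = sum(1 for s in sessions if s.net_loss > 0 and s.redeposited)
    if chases >= chase_threshold:
        reasons.append("possible loss chasing")
    return reasons

history = [Session(200, 50.0, True), Session(90, 120.0, True), Session(60, 80.0, True)]
print(flag_risky_play(history))  # ['unusually long session', 'possible loss chasing']
```

A production system would learn such patterns from data rather than hard-code them, but the output is the same kind of signal: a reason the casino can act on, such as a cooling-off prompt or a deposit limit.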

The deeper analytical skills of AI can help detect cheaters very early on. This can ensure a level gaming field and safer gambling for everyone involved.

AI can flag suspicious behaviour and interpret any patterns. This will help operators catch cheaters faster before they can harm other players.

This is especially useful for online casinos. At a casino hall, cameras and bouncers monitor the games, whereas online casinos don't have as many eyes.

These are but a few of the benefits everyone stands to gain with AI. Its implementation is still in the early stages but there are positive signs.

For example, AI use in video games has been rapidly increasing and casinos are known to heavily borrow from games. So we can expect that to happen with AI too.

AI has been used in video games for a while now, but not the self-learning kind. That is the AI found in language processing, computer vision and self-driving cars.

Interestingly, self-learning AI was advanced by software that improved itself through playing games. Examples include OpenAI's Dota 2 bot and DeepMind's AlphaGo program.

It was only in the last decade or so that game developers got access to these advanced tools. Through this, they were able to create more intelligent and immersive games that use sophisticated AI.

Games that can read the player and respond to their moves. NPCs that change and evolve over the course of a player's playthrough.

With this, in the future, we can expect improvements like customized experiences in casino games. Through AI, casino operators can shape and adapt in-game experiences depending on how the player responds.

This goes beyond just the in-game experience. AI can detect the preferred game mode and customize the homepage with curated selections that a player will like.

It can track trends, figure out patterns and predict future actions with accuracy. Casino operators can use this data to fine-tune their games and deliver better experiences.
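To make the idea of pattern tracking concrete, here is a minimal, hypothetical sketch of how play history could re-rank a game lobby. The game names, data fields and ranking rule are illustrative assumptions only, not a real casino API:

```python
from collections import Counter

def curate_lobby(play_history, catalogue, top_n=3):
    """Hypothetical sketch: rank a casino lobby by the categories a
    player actually plays, keeping catalogue order for ties."""
    freq = Counter(game["category"] for game in play_history)
    # Games in frequently played categories first; sorted() is stable,
    # so unplayed categories keep their original catalogue order.
    return sorted(catalogue, key=lambda g: -freq[g["category"]])[:top_n]

history = [{"category": "slots"}, {"category": "slots"}, {"category": "blackjack"}]
catalogue = [
    {"name": "Roulette Pro", "category": "roulette"},
    {"name": "Lucky Sevens", "category": "slots"},
    {"name": "Classic 21", "category": "blackjack"},
]
print([g["name"] for g in curate_lobby(history, catalogue)])
# ['Lucky Sevens', 'Classic 21', 'Roulette Pro']
```

Real personalization engines would add recency weighting and prediction models on top, but the principle is the same: observed behaviour reshapes what the player sees first.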

Moreover, AI can help casinos analyze the data collected to deliver more personalized ads. Special offers and experiences can be promoted to specific players.

These are processes that would take far too long for humans. With AI, they can be analyzed within minutes.

What began with 16-bit graphics is now a real-time, realistic gaming experience. People can play with a human dealer and human players with hardly any hassle.

Aside from a realistic experience, the integration of AI will vastly help in online casino development. Whether it is tweaking the UI or the website, casino operators can use AI to discover the optimal settings.

And as a player, responsible gaming and a fair experience will be possible in all games. From online roulette to online blackjack, AI promises to revolutionize how you game.



5 Times Artificial Intelligence Have Busted World Champions – Analytics Insight

Artificial intelligence has beaten eight world champions at bridge.

Even though people are not ready to accept it, there will be a point in the future when artificial intelligence (AI) takes over many human jobs. Some analysts estimate that over 50% of jobs worldwide will be lost to AI in the forthcoming years. Jobs like accounting, human resources, management and more will become obsolete some time in the future, thanks to artificial intelligence. However, you don't have to wait until 2030 to find out how technologies like AI, robotics and machine learning will overtake mankind: they are already beating humans at the most strategic and complicated games, like chess and Go, which have long been used as benchmarks of intelligence. This article features five instances of artificial intelligence busting world champions.

The victory represents a new milestone for artificial intelligence because, in bridge, players work with incomplete information and must react to the behavior of several other players, a scenario far closer to human decision-making.



The Guardian view on bridging human and machine learning: it's all in the game – The Guardian

Last week an artificial intelligence called NooK beat eight world champion players at bridge. That algorithms can outwit humans might not seem newsworthy. IBM's Deep Blue beat world chess champion Garry Kasparov in 1997. In 2016, Google's AlphaGo defeated a Go grandmaster. A year later the AI Libratus saw off four poker stars. Yet the real-world applications of such technologies have been limited. Stephen Muggleton, a computer scientist, suggests this is because they are black boxes that can learn better than people but cannot express, and communicate, that learning.

NooK, from French startup NukkAI, is different. It won by formulating rules, not just brute-force calculation. Bridge is not the same as chess or Go, which are two-player games based on an entirely known set of facts. Bridge is a game for four players split into two teams, involving collaboration and competition with incomplete information. Each player sees only their cards and needs to gather information about the other players' hands. Unlike poker, which also involves hidden information and bluffing, in bridge a player must disclose to their opponents the information they are passing to their partner.

This feature of bridge meant NooK could explain how its playing decisions were made, and why it represents a leap forward for AI. When confronted with a new game, humans tend to learn the rules and then learn to improve by, for example, reading books. By contrast, black box AIs train themselves by deep learning: playing a game billions of times until the algorithm has worked out how to win. It is a mystery how this software comes to its conclusions or how it will fail.

NooK nods to the work of British AI pioneer Donald Michie, who reasoned that AI's highest state would be to develop new insights and teach these to humans, whose performance would consequently be raised to a level beyond that of a human studying by themselves. Michie considered weak machine learning to be just improving AI performance by increasing the amount of data ingested.

His insight has been vindicated as deep learning's limits have been exposed. Self-driving cars remain a distant dream. Radiologists were not replaced by AI last year, as had been predicted. Humans, unlike computers, often make short work of complicated, high-stakes tasks. Thankfully, human society is not under constant diagnostic surveillance. But this often means not enough data for AI is available, and frequently it contains hidden, socially unacceptable biases. The environmental impact is also a growing concern, with computing projected to account for 20% of global electricity demand by 2030.

Technologies build trust if they are understandable. There's always a danger that black box AI solves a problem in the wrong way. And the more powerful a deep-learning system becomes, the more opaque it can become. The House of Lords justice committee this week said such technologies have serious implications for human rights and warned against convictions and imprisonment on the basis of AI that could not be understood or challenged. NooK will be a world-changing technology if it lives up to the promise of solving complex problems and explaining how it does so.


How to Strengthen America’s Artificial Intelligence Innovation – The National Interest

Rapidly developing artificial intelligence (AI) technology is becoming increasingly critical for innovation and economic growth. To secure American leadership and competitiveness in this emerging field, policymakers should create an innovation-friendly environment for AI research. To do so, federal authorities should identify ways to engage the private sector and research institutions.

The National AI Research and Development (R&D) Strategic Plan, which will soon be updated by the Office of Science and Technology Policy (OSTP) and the National Science and Technology Council (NSTC), presents such an opportunity. However, the AI Strategic Plan needs several updates to allow the private sector and academic institutions to become more involved in developing AI technologies.

First, the OSTP should propose the creation of a federal AI regulatory sandbox to allow companies and research institutions to test innovative AI systems for a limited time. An AI sandbox would not only benefit consumers and participating companies; it would also enable regulators to gain first-hand insights into emerging AI systems and help craft market-friendly regulatory frameworks and technical standards. Regulators could also create sandbox programs to target innovation on specific issues, such as human-machine interaction and probabilistic reasoning, that the AI Strategic Plan identifies as priority areas in need of further research.

Second, the updated AI strategy should outline concrete steps to publish high-quality data sets using the vast amount of non-sensitive and non-personally identifiable data that the federal government possesses. AI developers need high-quality data sets on which AI systems can be trained, but the lack of access to these data sets remains a significant challenge for developing novel AI technologies, especially for startups and businesses without the resources of big tech companies. The costs associated with creating, cleaning, and preparing such data sets are too high for many businesses and academic institutions. For example, AlphaGo, a software produced by Google subsidiary DeepMind, made headlines in March 2016 when it defeated the human champion of a Chinese strategy game. More than $25 million was spent on hardware alone to train data sets for this program.

Recognizing this challenge, the AI Strategic Plan recommended the development of shared public data sets, but progress in this area appears to be slow. Under the 1974 Privacy Act, the U.S. government has not created a central data repository, an important safeguard given the privacy and cybersecurity risks that such a repository of sensitive information would pose. However, different U.S. agencies have created a wide range of non-personally identifiable and non-sensitive data sets intended for public use. Two notable examples are the National Oceanic and Atmospheric Administration's climate data and NASA's non-confidential space-related data. Making such data readily available to the public can promote AI innovation in weather forecasting, transportation, astronomy, and other underexplored subjects.

Therefore, the AI strategy should propose a framework that enables the OSTP and the NSTC to work with government agencies to ensure that non-sensitive and non-personally identifiable data, intended for public use, are made available in a format suitable for AI research by the private sector and research institutions. To that end, the OSTP and the NSTC could use the federal government's existing FedRAMP classification of different data types to decide which data should be included in such data sets.

Finally, the AI Strategic Plan would benefit from a closer examination of other countries' AI R&D strategies. While policymakers should exercise caution in making international comparisons, awareness of these broader trends can help the United States capitalize on different countries' successes and avoid their regulatory mistakes. For example, the British and French governments recently spearheaded initiatives to promote high-level interdisciplinary AI research in multiple disciplines. Likewise, the Chinese government has launched similar initiatives to encourage cross-disciplinary academic research at the intersection of artificial intelligence, economics, psychology, and other disciplines. Studying and evaluating other countries' approaches could provide American policymakers with insights into which existing R&D resources should be devoted to interdisciplinary AI projects.

To maximize the benefit of this comparative approach, the AI Strategic Plan should propose mechanisms to conduct annual reviews of the global AI research and regulatory landscape and evaluations of its successes and failures.

Ultimately, due to AI's general-purpose nature and its diffusion across the economy, the AI Strategic Plan should focus on enabling a wide range of actors, from startups to academic and financial institutions, to play a role in strengthening American AI innovation. An innovation-friendly research environment and an adaptable, light-touch regulatory approach are vital to secure America's global economic competitiveness and technological innovation in artificial intelligence.

Ryan Nabil is a Research Fellow at the Competitive Enterprise Institute in Washington, DC.

Image: Flickr/U.S. Air Force.


Why it’s time to address the ethical dilemmas of artificial intelligence – Economic Times

The Future of Life Institute (FLI) was founded in March 2014 by eminent futurologists and researchers to reduce catastrophic and existential risks to humankind from advanced technologies like artificial intelligence (AI). Elon Musk, who is on FLI's advisory board, donated $10 million to jump-start research on AI safety because, in his words, 'with artificial intelligence, we are summoning the devil'. For something that everyone is singing hosannas to these days, and treating as a solution to almost all challenges faced by industry or healthcare or education, why this cautionary tale?

AI's perceived risk isn't only from autonomous weapon systems that countries like the US, China, Israel and Turkey produce that can track and target humans and assets without human intervention. It's equally about the deployment of AI and such technologies for mass surveillance, adverse health interventions, contentious arrests and the infringement of fundamental rights. Not to mention the vulnerabilities that dominant governments and businesses can insidiously create.

AI came into global focus in 1997 when IBM's Deep Blue beat world chess champion Garry Kasparov. We came to accept that the outcome was inevitable, considering it was a game based on logic. And that the ability of the computer to reference past games, figure options and select the most effective move instantly, is superior to what humans could ever do. When Google DeepMind's AlphaGo program bested the world's best Go player Lee Sedol in 2016, we learnt that AI could easily master games based on intuition too.

AI, AI, Sir

As the United Nations Educational, Scientific and Cultural Organisation (Unesco) sharpened its focus on the ethical dilemmas that AI could create, it embarked on developing a legal, global document on the subject. Situations discussed include how a search engine can become an echo chamber upholding real-life biases and prejudices - like when we search for the 'greatest leaders of all time' and get a list of only male personalities. Or the quandary when a car brakes to avoid a jaywalker and shifts the risk from the pedestrian to the travellers in the car. Or when AI is exploited to study 346 Rembrandt paintings pixel by pixel, leveraging deep-learning algorithms to produce a magnificent, 3D-printed masterpiece that could deceive the best art experts and connoisseurs.

Then there is the AI-aided application of justice in legislation, administration, adjudication and arbitration. Unesco's quest to provide an ethical framework to ensure emerging technologies benefit humanity at large is, indeed, a noble one.

Interestingly, computer scientists at the Vienna University of Technology (TU Wien), Austria, are studying Indian Vedic texts and applying them to mathematical logic. The idea is to develop reasoning tools to address deontic - relating to duty and obligation - concepts like prohibitions and commitments, to implement ethics in AI.

Logicians at the Institute of Logic and Computation at TU Wien and the Austrian Academy of Sciences are also mining the Mimamsa, which interprets the Vedas and suggests how to maintain harmony in the world, to resolve many innate contradictions. Essentially, as classical logic is less useful when dealing with ethics, deontic logic needs to be developed that can be expressed in mathematical formulae, creating a framework that computers can comprehend and respond to.

Isaac Asimov's iconic 1950 book, I, Robot, sets out the three rules all robots must be programmed with: the Three Laws of Robotics - 1. To never harm a human or allow a human to come to harm. 2. To obey humans unless this violates the first law. 3. To protect its own existence unless this violates the first or second laws. In the 2004 film adaptation, a larger threat is envisaged - when AI-enabled robots rebel and try to enslave and control all humans, to protect humanity for its own good, by their dialectic.

Artificially Real

In the real world, there is little doubt that AI has to be mobilised for the greater good, guided by the right human intention, so that it can be leveraged to control larger forces of nature like climate change and natural disasters that we can't otherwise manage. AI must be a means to nourish humanity in multifarious ways, rather than unobtrusively aid its destruction. It is obvious that the Three Laws of Robotics must be augmented, so that expanded algorithms help the AI engine respect privacy, and not discriminate in terms of race, gender, age, colour, wealth, religion, power or politics.

We're seeing the mainstreaming of AI in an age of exponential digital transformation. How we figure its future will shape the next stage of human evolution. The time is opportune for governments to confabulate - to shape equitable outcomes, a risk management strategy and pre-emptive contingency plans.
