Archive for the ‘Artificial Intelligence’ Category

Intelligent gaming: How Artificial Intelligence and machine learning raise the stakes – The Times of India Blog

From using computers to work faster to now teaching computers to work on their own, we've made quantum leaps in the possibilities of computer technology. Driving this new generation forward are the advances we've made in Artificial Intelligence and Machine Learning.

Algorithms dictate processes, solve problems, perform calculations, and more. What Artificial Intelligence does, with the help of Machine Learning, is cut out the middle-man (us, for the most part). To simplify, Machine Learning creates algorithms that AI systems can use to process data and learn new things without having to be explicitly programmed. Detailed algorithms are now capable of learning on the fly and adapting to situations, using big data and deep learning at almost instantaneous speeds. For instance, AI systems with Machine Learning capabilities can recognize faces, detect instances of fraud, predict customer behaviour, and much more.
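To make the distinction concrete, here is a minimal, hypothetical sketch (not from the article) of learning from labelled examples rather than from hand-written rules, using a generic scikit-learn classifier; the features, data, and churn framing are invented purely for illustration.

```python
# Hedged sketch: the "rule" for predicting player behaviour is learned from
# examples instead of being explicitly programmed. All data is made up.
from sklearn.linear_model import LogisticRegression

# Each row: [sessions in the last week, average minutes per session]
X = [[1, 5], [2, 8], [9, 40], [0, 2], [12, 55], [3, 10], [10, 35], [1, 4]]
y = [0, 0, 1, 0, 1, 0, 1, 0]  # 1 = kept playing, 0 = churned

model = LogisticRegression().fit(X, y)  # no hand-coded thresholds anywhere
print(model.predict([[8, 30]]))         # expected: [1], an engaged player
```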

The past decade has seen the greatest proliferation of this technology, with everything from social media platforms and OTT content providers to matrimonial services employing some form of it. So, it's only natural that online gaming, a multi-billion-dollar industry, has had its interest piqued by its possibilities, and the possibilities are vast!

How are Machine Learning and Artificial Intelligence improving gaming (and vice versa)?

Gaming and Artificial Intelligence are a match made in technology heaven. AI and ML help improve the in-game and product experience by powering a more personalized experience for players. They also have applications in content marketing, helping create user journeys with increased efficiency and leading to a landscape where users have instant and reliable access to what they want, cutting through the clutter.

Advanced analytics can help players analyse their gameplay, reflect on it, and develop new strategies. Real-time probability analysis helps a player calculate the odds of a win. The introduction of metrics like VPIP in online poker, which conveys how frequently a player voluntarily participates in a hand, has given gamers a quantitative and qualitative overview of their own as well as their opponents' performances. Online gaming thus becomes more than just the click of a button and starts to feel just as real, if not more so.
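As a rough, hedged illustration of how a VPIP-style statistic could be computed from hand histories, consider the sketch below; the record format and field names are assumptions made for the example, not any poker platform's actual data model.

```python
# Hypothetical sketch: VPIP = share of dealt hands in which a player
# voluntarily put money into the pot before the flop.
def vpip(hands, player):
    """Return the player's VPIP as a percentage of hands dealt to them."""
    dealt = [h for h in hands if player in h["players"]]
    if not dealt:
        return 0.0
    voluntary = sum(1 for h in dealt if player in h["voluntary_preflop_bet"])
    return 100.0 * voluntary / len(dealt)

hands = [
    {"players": {"A", "B"}, "voluntary_preflop_bet": {"A"}},
    {"players": {"A", "B"}, "voluntary_preflop_bet": {"A", "B"}},
    {"players": {"A", "B"}, "voluntary_preflop_bet": {"B"}},
    {"players": {"A", "B"}, "voluntary_preflop_bet": set()},
]
print(vpip(hands, "A"))  # 50.0 -> A voluntarily entered 2 of 4 hands
```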

This is even truer when you look at the back-end benefits of employing AI and ML. Fraud is heavily mitigated, thanks to technology capable of detecting anomalies in real time. Customer experience is vastly improved, as AI helps online games reach the right gamers far more easily. Player protection becomes much more rapid, allowing online platforms to help users play responsibly. Harnessing this technology can truly make online gaming a far safer experience for the player.
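One way such real-time anomaly screening might be implemented, offered here only as a hedged sketch under assumed features and thresholds rather than as any platform's actual system, is an unsupervised detector trained on normal transaction patterns:

```python
# Hypothetical sketch of fraud screening via anomaly detection.
# Features and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per transaction: [deposit amount, deposits in the last hour]
normal_txns = rng.normal(loc=[50.0, 1.0], scale=[20.0, 0.5], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_txns)

new_txns = np.array([[55.0, 1.0], [5000.0, 12.0]])
print(detector.predict(new_txns))  # 1 = looks normal, -1 = flag for review
```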

Another benefit is the cost-efficiency of the technology. As it becomes more prevalent, we see improvements in several aspects of a gamer's online journey. Things like online payments, security, and customer support become far more efficient and will steadily undergo transformation and improvement over the next few years. Chatbots are one example where the user experience can be elevated by providing faster, more efficient support to customers.

On the flip side, this surge in adoption is also driving rapid development of the technology itself. In an industry where decisions are made in milliseconds, we need technology that can work in nanoseconds. So, we now see huge resources being dedicated to enabling just that. Industry giants sense the underlying possibilities and have specialized departments focused on improving these Artificial Intelligence algorithms. It wouldn't be surprising to see generational leaps in the capabilities of AI as a whole. While they say it's unnecessary to reinvent the wheel, as gamers, we're inclined to try and make the wheel go faster!

Views expressed above are the author's own.



CDAC, IITs to jointly offer online course on artificial intelligence – The Indian Express

Students with basic knowledge of machine learning can apply for an online course on applied artificial intelligence (AI) offered by select Indian Institutes of Technology (IITs).

The course will teach ways to implement AI for industrial use and in domains like healthcare, smart city projects, and so on.

The course, which includes demonstrations, code walkthroughs, and industrial use-cases, is part of the ongoing National Supercomputing Mission (NSM). The six-year-old mission is being jointly led by the Centre for Development of Advanced Computing (CDAC) and the Indian Institute of Science under the aegis of the department of science and technology and the electronics and IT ministry.

The online course, to be jointly conducted by IITs Kharagpur, Madras, Palakkad and Goa, will cover topics like the fundamentals of AI accelerators and system setup, accelerated deep learning, end-to-end accelerated data science, and industrial use-cases of accelerated AI.

For registrations and further details, applicants can visit iitgoa.ac.in/aishikshaai/schedule.php

The 33-session long course will commence on January 31 and is best suited for students in their third and fourth years of engineering from any stream, science postgraduates, PhD scholars and working professionals.


Artificial Intelligence Used To Search for the Next SARS-COV-2 – SciTechDaily

Rhinolophus rouxi, which inhabits parts of South Asia, was identified as a likely but undetected betacoronavirus host by the study authors. Credit: Brock and Sherri Fenton

Daniel Becker, an assistant professor of biology in the University of Oklahoma's Dodge Family College of Arts and Sciences, has been leading a proactive modeling study over the last year and a half to identify bat species that are likely to carry betacoronaviruses, including but not limited to SARS-like viruses.

The study, "Optimizing predictive models to prioritize viral discovery in zoonotic reservoirs," published in The Lancet Microbe, was guided by Becker; Greg Albery, a postdoctoral fellow at Georgetown University's Bansal Lab; and Colin J. Carlson, an assistant research professor at Georgetown's Center for Global Health Science and Security.

It also included collaborators from the University of Idaho, Louisiana State University, University of California Berkeley, Colorado State University, Pacific Lutheran University, Icahn School of Medicine at Mount Sinai, University of Glasgow, Université de Montréal, University of Toronto, Ghent University, University College Dublin, Cary Institute of Ecosystem Studies, and the American Museum of Natural History.

Becker and colleagues' study is part of the broader efforts of an international research team called the Verena Consortium (viralemergence.org), which works to predict which viruses could infect humans, which animals host them, and where they could emerge. Albery and Carlson co-founded the consortium in 2020, with Becker as a founding member.

Despite global investments in disease surveillance, it remains difficult to identify and monitor wildlife reservoirs of viruses that could someday infect humans. Statistical models are increasingly being used to prioritize which wildlife species to sample in the field, but the predictions being generated from any one model can be highly uncertain. Scientists also rarely track the success or failure of their predictions after they make them, making it hard to learn and make better models in the future. Together, these limitations mean that there is high uncertainty in which models may be best suited to the task.

In this study, researchers used bat hosts of betacoronaviruses, a large group of viruses that includes those responsible for SARS and COVID-19, as a case study for how to dynamically use data to compare and validate these predictive models of likely reservoir hosts. The study is the first to prove that machine learning models can optimize wildlife sampling for undiscovered viruses and illustrates how these models are best implemented through a dynamic process of prediction, data collection, validation and updating.

In the first quarter of 2020, researchers trained eight different statistical models that predicted which kinds of animals could host betacoronaviruses. Over more than a year, the team then tracked the discovery of 40 new bat hosts of betacoronaviruses to validate their initial predictions and dynamically update their models. The researchers found that models harnessing data on bat ecology and evolution performed extremely well at predicting new hosts of betacoronaviruses. In contrast, cutting-edge models from network science that used high-level mathematics but less biological data performed roughly as well as, or worse than, expected at random.
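The sketch below is a hedged, heavily simplified illustration of the predict-sample-validate-update loop described above; the trait features, data, and single random-forest model are placeholders and do not reproduce the study's eight models or its bat-trait datasets.

```python
# Hypothetical sketch of prioritizing species to sample, then validating
# predictions against new field data. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
traits = rng.normal(size=(300, 4))  # stand-ins for ecological/evolutionary traits
is_known_host = (traits[:, 0] + rng.normal(scale=0.5, size=300) > 1).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(traits, is_known_host)

# Rank unsampled species by predicted probability of being an undetected host.
unsampled = rng.normal(size=(50, 4))
priority = model.predict_proba(unsampled)[:, 1]
shortlist = np.argsort(priority)[::-1][:10]  # top 10 species to sample next

# When field results come back, score the predictions and retrain with new labels.
field_results = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])  # placeholder findings
print("AUC on newly sampled species:", roc_auc_score(field_results, priority[shortlist]))
```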

Importantly, their revised models predicted over 400 bat species globally that could be undetected hosts of betacoronaviruses, not only in southeast Asia but also in sub-Saharan Africa and the Western Hemisphere. Although 21 species of horseshoe bats (in the Rhinolophus genus) are known to be hosts of SARS-like viruses, the researchers found that at least half of the plausible betacoronavirus reservoirs in this bat genus might still be undetected.

"One of the most important things our study gives us is a data-driven shortlist of which bat species should be studied further," said Becker, who added that his team is now working with field biologists and museums to put their predictions to use. "After identifying these likely hosts, the next step is then to invest in monitoring to understand where and when betacoronaviruses are likely to spill over."

Becker added that although the origins of SARS-CoV-2 remain uncertain, the spillover of other viruses from bats has been triggered by forms of habitat disturbance, such as agriculture or urbanization.

"Bat conservation is therefore an important part of public health, and our study shows that learning more about the ecology of these animals can help us better predict future spillover events," he said.

For more on this research, see Shall We Play a Game? Researchers Use AI To Search for the Next COVID/SARS-Like Virus.

Reference: "Optimising predictive models to prioritise viral discovery in zoonotic reservoirs" by Daniel J. Becker, Gregory F. Albery, Anna R. Sjodin, Timothée Poisot, Laura M. Bergner, Binqi Chen, Lily E. Cohen, Tad A. Dallas, Evan A. Eskew, Anna C. Fagre, Maxwell J. Farrell, Sarah Guth, Barbara A. Han, Nancy B. Simmons, Michiel Stock, Emma C. Teeling and Colin J. Carlson, 10 January 2022, The Lancet Microbe. DOI: 10.1016/S2666-5247(21)00245-7


Artificial Intelligence (AI) – United States Department of …

A global technology revolution is now underway. The world's leading powers are racing to develop and deploy new technologies like artificial intelligence and quantum computing that could shape everything about our lives, from where we get energy, to how we do our jobs, to how wars are fought. We want America to maintain our scientific and technological edge, because it's critical to our thriving in the 21st century economy.

Investments in AI have led to transformative advances now impacting our everyday lives, including mapping technologies, voice-assisted smart phones, handwriting recognition for mail delivery, financial trading, smart logistics, spam filtering, language translation, and more. AI advances are also providing great benefits to our social wellbeing in areas such as precision medicine, environmental sustainability, education, and public welfare.

The term "artificial intelligence" means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.

The Department of State focuses on AI because it is at the center of the global technological revolution; advances in AI technology present both great opportunities and challenges. The United States, along with our partners and allies, can both further our scientific and technological capabilities and promote democracy and human rights by working together to identify and seize the opportunities while meeting the challenges by promoting shared norms and agreements on the responsible use of AI.

Together with our allies and partners, the Department of State promotes an international policy environment and works to build partnerships that further our capabilities in AI technologies, protect our national and economic security, and promote our values. Accordingly, the Department engages in various bilateral and multilateral discussions to support responsible development, deployment, use, and governance of trustworthy AI technologies.

The Department provides policy guidance to implement trustworthy AI through the Organization for Economic Cooperation and Development (OECD) AI Policy Observatory, a platform established in February 2020 to facilitate dialogue between stakeholders and provide evidence-based policy analysis in the areas where AI has the most impact. The State Department provides leadership and support to the OECD Network of Experts on AI (ONE AI), which informs this analysis. The United States has 47 AI initiatives associated with the Observatory that help contribute to COVID-19 response, invest in workforce training, promote safety guidance for automated transportation technologies, and more.

The OECD's Recommendation on Artificial Intelligence is the backbone of the activities at the Global Partnership on Artificial Intelligence (GPAI) and the OECD AI Policy Observatory. In May 2019, the United States joined with likeminded democracies of the world in adopting the OECD Recommendation on Artificial Intelligence, the first set of intergovernmental principles for trustworthy AI. The principles promote inclusive growth, human-centered values, transparency, safety and security, and accountability. The Recommendation also encourages national policies and international cooperation to invest in research and development and support the broader digital ecosystem for AI. The Department of State champions the principles as the benchmark for trustworthy AI, which helps governments design national legislation.

GPAI is a voluntary, multi-stakeholder initiative launched in June 2020 for the advancement of AI in a manner consistent with democratic values and human rights. GPAI's mandate is focused on project-oriented collaboration, which it supports through working groups looking at responsible AI, data governance, the future of work, and commercialization and innovation. As a founding member, the United States has played a critical role in guiding GPAI and ensuring it complements the work of the OECD.

In the context of military operations in armed conflict, the United States believes that international humanitarian law (IHL) provides a robust and appropriate framework for the regulation of all weapons, including those using autonomous functions provided by technologies such as AI. Building a better common understanding of the potential risks and benefits that are presented by weapons with autonomous functions, in particular their potential to strengthen compliance with IHL and mitigate risk of harm to civilians, should be the focus of international discussion. The United States supports the progress in this area made by the Convention on Certain Conventional Weapons, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapon Systems (GGE on LAWS), which adopted by consensus 11 Guiding Principles on responsible development and use of LAWS in 2019. The State Department will continue to work with our colleagues at the Department of Defense to engage the international community within the LAWS GGE.

Learn more about what specific bureaus and offices are doing to support this policy issue:

The Global Engagement Center has developed a dedicated effort for the U.S. Government to identify, assess, test and implement technologies against the problems of foreign propaganda and disinformation, in cooperation with foreign partners, private industry and academia.

The Office of the Under Secretary for Management uses AI technologies within the Department of State to advance traditional diplomatic activities, applying machine learning to internal information technology and management consultant functions.

The Office of the Under Secretary of State for Economic Growth, Energy, and the Environment engages internationally to support the U.S. science and technology (S&T) enterprise through global AI research and development (R&D) partnerships, setting fair rules of the road for economic competition, advocating for U.S. companies, and enabling foreign policy and regulatory environments that benefit U.S. capabilities in AI.

The Office of the Under Secretary of State for Arms Control and International Security focuses on the security implications of AI, including potential applications in weapon systems, its impact on U.S. military interoperability with its allies and partners, its impact on stability, and export controls related to AI.

The Office of the Under Secretary for Civilian Security, Democracy, and Human Rights and its component bureaus and offices focus on issues related to AI and governance, human rights, including religious freedom, and law enforcement and crime, among others.

The Office of the Legal Adviser leads on issues relating to AI in weapon systems (LAWS), in particular at the Group of Governmental Experts on Lethal Autonomous Weapons Systems convened under the auspices of the Convention on Certain Conventional Weapons.

For more information on federal programs and policy on artificial intelligence, visit ai.gov.


What is Artificial Intelligence (AI)? – India | IBM

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in this 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing's seminal work, "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside of IBM), published in 1950. In this paper, Turing, often referred to as the "father of computer science", asks the following question: "Can machines think?" From there, he offers a test, now famously known as the "Turing Test", in which a human interrogator tries to distinguish between a computer's and a human's text responses. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, as it draws on ideas around linguistics.

Stuart Russell and Peter Norvig then proceeded to publish Artificial Intelligence: A Modern Approach (link resides outside IBM), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking vs. acting:

Human approach: systems that think like humans, and systems that act like humans.

Ideal approach: systems that think rationally, and systems that act rationally.

Alan Turing's definition would have fallen under the category of systems that act like humans.

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines comprise AI algorithms that seek to create expert systems which make predictions or classifications based on input data.

Today, a lot of hype still surrounds AI development, which is to be expected of any newly emerging technology in the market. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain. As Lex Fridman notes here (link resides outside IBM) in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment. To read more on where IBM stands within the conversation around AI ethics, read more here.

Weak AI, also called Narrow AI or Artificial Narrow Intelligence (ANI), is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. "Narrow" might be a more accurate descriptor for this type of AI, as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI in which a machine would have intelligence equal to that of humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical, with no practical examples in use today, that doesn't mean AI researchers aren't exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.

Since deep learning and machine learning tend to be used interchangeably, it's worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

Deep learning is built on neural networks. "Deep" in deep learning refers to a neural network comprised of more than three layers, inclusive of the input and output layers; such a network can be considered a deep learning algorithm. This is generally represented with a layer diagram; a minimal code sketch of such a network is shown below.
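As a minimal sketch (not from the IBM article, and assuming a PyTorch-style framework), a network with more than three layers, counting the input and output layers, might look like this:

```python
# Hedged sketch of a small "deep" network: input layer, two hidden layers,
# and an output layer. Layer sizes are arbitrary choices for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),    # input layer (10 features) -> hidden layer 1
    nn.Linear(32, 16), nn.ReLU(),    # hidden layer 2
    nn.Linear(16, 1), nn.Sigmoid(),  # output layer: a single probability
)

x = torch.randn(4, 10)  # a batch of 4 examples with 10 features each
print(model(x).shape)   # torch.Size([4, 1])
```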

The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature-extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning", as Lex Fridman noted in the same MIT lecture referenced above. Classical, or "non-deep", machine learning is more dependent on human intervention to learn: human experts determine the hierarchy of features used to understand the differences between data inputs, which usually requires more structured data to learn from. A hedged sketch of that classical workflow follows below.
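The sketch below uses invented features and toy data to illustrate the classical workflow; the point is that a person, not the model, decides which features matter.

```python
# Hypothetical sketch of classical (non-deep) machine learning: a human
# chooses the features, and a shallow model learns from them.
from sklearn.tree import DecisionTreeClassifier

def hand_engineered_features(message):
    # A human expert picked these features; the model never sees raw text.
    return [len(message), message.count("!"), float("free" in message.lower())]

messages = ["Win a FREE prize now!!!", "Meeting moved to 3pm",
            "FREE entry, click!", "Lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

X = [hand_engineered_features(m) for m in messages]
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
print(clf.predict([hand_engineered_features("Claim your free gift!")]))  # [1] -> spam
```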

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesnt necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features which distinguish different categories of data from one another. Unlike machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.

There are numerous real-world applications of AI systems today. Below are some of the most common examples:

The idea of 'a machine that thinks' dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article), important events and milestones in the evolution of artificial intelligence include the following:

IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and on learnings from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:

IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows, while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions.

