Archive for the ‘Machine Learning’ Category

How can AI and Machine Learning protect identity security? – Innovation News Network

The recent advent of ChatGPT has created an explosion of interest in Artificial Intelligence (AI) and Machine Learning (ML). While everyone is theorising about the potential use of these technologies, AI and ML already accelerate identity security by streamlining processes and providing actionable insights to administrators and users.

Identity security refers to the measures and techniques used to protect an individual's or machine's unique identity and sensitive information from being stolen, misused, or compromised. This type of security focuses on verifying and authenticating the identity of a human or digital user before granting access to certain systems or information. It involves several components, including authentication, authorisation, and access control.

Securing identities is critical in today's digital age, as cyber threats continue to evolve and the risks associated with data breaches and identity theft become increasingly severe. Organisations and individuals must proactively protect their personal identity and sensitive information, including by implementing strong authentication mechanisms, regularly monitoring and auditing access controls, and staying up-to-date with the latest security best practices and technologies.

Before looking into how Artificial Intelligence and Machine Learning help bolster identity security programmes, let's establish how AI and ML function and what the main differentiators of these two technologies are.

While AI and ML are both fields of computer science that deal with developing intelligent systems, there's a significant difference between these two technologies.

AI involves creating computer programmes that can perform tasks that typically require human intelligence, such as problem-solving, decision-making, and natural language processing. ML is a subfield of AI that creates algorithms which can learn and improve from data without being explicitly programmed.

The main difference between these two technologies is that AI is a broader concept encompassing different techniques and approaches. At the same time, ML is a specific application of AI that involves training algorithms to recognise patterns in data and make predictions or decisions based on that data.

Next to academic or theoretical AI research, which focuses on developing new algorithms or advancing the field's fundamental knowledge, applied and generative AI are the two branches that find practical application in day-to-day life, professional or personal.

Applied AI solutions often involve natural language processing, computer vision, or other AI techniques combined with domain-specific expertise and data. This branch is used in various fields, such as healthcare, finance, transportation, and manufacturing. ML falls under this branch of AI technology.

Examples of applied AI solutions include fraud detection in financial transactions, predictive maintenance in manufacturing, chatbots for customer service, recommendation systems for e-commerce, and image recognition in healthcare.

Overall, applied AI aims to bring the benefits of AI technologies to practical use cases, improving efficiency, productivity and decision-making in various industries and domains.


On the other hand, generative AI is a subset of machine learning that involves training models to generate novel outputs, such as images, videos, music, or text.

Using deep learning algorithms to learn patterns and relationships within a dataset, generative AI can create new content similar in style, format, or structure. To work, these algorithms are trained on large datasets, often containing millions of examples, and can produce highly realistic and convincing outputs, as we have recently observed with ChatGPT.

Generative AI has potential applications in areas such as healthcare, finance, and autonomous driving, where it can be used to generate synthetic data for testing and training AI models.

Drilling down to identity security, it is ML that can be most readily leveraged to analyse user behaviour, find and mitigate vulnerabilities, and streamline operations.

ML technology can provide valuable insights and suggestions based on data analysis, optimising workflows and reducing frustration for administrators tasked with managing identity security programmes.

There are multiple ways in which ML can be effectively applied to this field, for example, by empowering workforces, simplifying management, reducing costs, and more. With its contextual understanding, a system can automatically recommend the next step or revise workflows, leading to improved and streamlined processes, fewer human errors, and stronger overall security.

One instance of how ML benefits identity security is when evaluating access rights and usage patterns. Here, ML enables the system to recommend access throughout an identity's lifecycle, from the initial request to ongoing micro-certification campaigns.

Furthermore, many of the routine activities related to identity security can be automated, making employee onboarding faster. The system can also offer insights to entitlement owners on how a person's access compares to that of their peers and other roles, helping expedite approvals and minimise digital exhaustion for administrators and end-users.
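The peer-comparison idea above can be sketched as a simple rule: suggest entitlements that most of a person's peers hold, and flag ones that almost none do. This is a minimal illustration, not any vendor's API; the function name, thresholds, and data shapes are all hypothetical:

```python
from collections import Counter

def recommend_access(user_entitlements, peer_entitlements, threshold=0.7):
    """Compare one identity's entitlements against its peer group.

    Suggests entitlements held by at least `threshold` of peers but
    missing for this user, and flags entitlements the user holds that
    almost no peer has (candidates for review or revocation).
    """
    counts = Counter(e for peer in peer_entitlements for e in peer)
    n = len(peer_entitlements)
    common = {e for e, c in counts.items() if c / n >= threshold}
    not_rare = {e for e, c in counts.items() if c / n >= 0.05}
    return {
        "suggest_granting": sorted(common - set(user_entitlements)),
        "flag_for_review": sorted(set(user_entitlements) - not_rare),
    }
```

A production system would weight peers by role similarity and surface these suggestions to entitlement owners rather than acting on them automatically.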

Moreover, machine learning can detect unusual behaviour and identity anomalies that may threaten the organisation. By analysing these outliers, access revocations can be automated or used to initiate additional reviews. When developing and maintaining roles, ML can evaluate current roles, identify any similar ones that could be merged, and suggest new roles that may be advantageous.
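As a toy illustration of detecting the outliers mentioned above, consider flagging a login whose hour-of-day is far from an identity's historical norm. Real deployments use many more features and learned models; the function name and threshold here are illustrative only:

```python
from statistics import mean, stdev

def is_anomalous(history_hours, new_hour, threshold=3.0):
    """Flag a login whose hour-of-day deviates strongly from this
    identity's historical pattern, using a simple z-score test."""
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return new_hour != mu
    return abs(new_hour - mu) / sigma > threshold
```

A flagged event like this could trigger an automated access review or a step-up authentication challenge rather than an outright revocation.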

Using analytics, AI and ML to improve enterprise identity security is critical to outpace cybersecurity threats. Rather than buzzwords, leaders want to see real-world use cases where human and machine intelligence meaningfully converge.

AI can bring several benefits to Identity Access Management (IAM).

AI and ML have the potential to revolutionise identity security and speed up the adoption of related programmes by providing actionable insights and streamlining processes.

Identity security is critical in today's digital age, where cyber threats continue to evolve, and the risks associated with data breaches and identity theft become increasingly severe.

ML can automate routine activities related to identity security, detect unusual behaviour and identity anomalies, evaluate access rights and usage patterns, and offer insights to entitlement owners.

Additionally, AI algorithms can enhance security measures and improve the user experience by reducing the time and effort required to manage IAM programmes. Utilising these capabilities, organisations can quickly identify and address high-risk access and activities, ensuring regulatory compliance on an ongoing basis.

Integrating AI and ML in identity security programmes can improve efficiency, productivity, and decision-making, enabling organisations and individuals to protect their personal identity and sensitive information. Moreover, organisations can shrink their threat landscape by reducing over-privileging and human error.

Jonathan Neal, VP of Solutions Engineering, Saviynt

Continue reading here:
How can AI and Machine Learning protect identity security? - Innovation News Network

Tags, AI, and dimensions – KMWorld Magazine

Remember tags? Around 2007, I was all over them, and I feel no shame about that. Well, not much shame. [David's book, Everything Is Miscellaneous, was published in 2007. Ed.] Letting users apply whatever tags, or folksonomies, they wanted to digital content blew apart constraints on knowledge that we'd assumed for millennia were strengths of knowledge. In fact, the idea that each thing had only one real tag was the bedrock of knowledge for thousands of years: A tomato is a vegetable, not some other thing.

Ok, nerds, you're right; a tomato is actually a berry. But you're just proving my point: We like to think that a thing is one thing and not any other. At least in some contexts.

Of course, before tags, we would apply multiple classifications to things: A book about tomatoes might get classified under recipes, healthy foods, and the genus Solanum. But a tomato is also a classically red object, roundish, delicious, squishy, a source of juice, a bad thing to learn juggling with, something we used to throw at bad actors and corrupt politicians, and so much more.

Then, with sites that allowed user-based tagging, users could tag tomatoes with whatever attributes were important to the user at that time. We can now do this with the photos we take, the places we go on our maps, the applications we use, the sites we visit, the music we listen to. Tags have become so common that they've faded from consciousness since 2007, although sometimes a clever hashtag pops up.

While AI in the form of machine learning can automatically apply tags, it may reduce the need for tags. Already we can search for photos based on their content, colors, or even their mood, all without anyone attaching tags to them.
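Tag-free search of this kind typically works by embedding both photos and candidate labels into a shared vector space, as CLIP-style models do, and matching by similarity. Here is a minimal sketch with toy two-dimensional vectors standing in for real model embeddings; in practice the embeddings come from a pretrained vision-language model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def auto_tag(photo_embedding, tag_embeddings, threshold=0.5):
    """Attach every tag whose embedding is close enough to the photo's."""
    return [tag for tag, emb in tag_embeddings.items()
            if cosine(photo_embedding, emb) >= threshold]
```

The same mechanism powers search: the query text is embedded and the closest photos are returned, with no human-applied tags anywhere in the pipeline.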

Machine learning redefines tagging

But more may be at stake. Might machine learning complete the conceptual job that tagging began, leading us from a definitional understanding of what things are to a highly relational view? My prediction (My motto: Someday I'll get one right!) is that within the next few years, dimensionality is going to become an important, everyday word.

One view of meaning is that a word is what its definition says it is, as if a definition were the long way of saying what the word says more compactly. But that's not how we use or hear words. In The Empire Strikes Back, when Princess Leia says, "I love you" to Han Solo and he replies, "I know," the definitions of those words completely miss what just transpired.

Tagging has made clear that things have very different meanings in different contexts and to different people. Definitions have their uses, but the times when you need a dictionary are the exception. Tags make explicit that what a thing is (or means) is dependent on context and intention.

Machine learning is getting us further accustomed to this idea, and not just for words. For example, a medical diagnostic machine learning model may have been trained on health records that have a wide variety of data in them, such as a patient's heart rate and blood pressure, weight, age, cholesterol level, medicines they're taking, past history, location, diet, and so forth. The more factors, the more dimensions.
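The point about factors and dimensions can be made concrete: each record becomes a point in as many dimensions as it has factors, and "similarity" becomes distance between points. A toy sketch (the fields and values are invented, and a real model would normalise each feature to a comparable scale first):

```python
import math

# Each factor is one dimension; a record is a point in that space.
patient_a = {"heart_rate": 72, "bp_systolic": 120, "age": 45, "cholesterol": 180}
patient_b = {"heart_rate": 88, "bp_systolic": 145, "age": 62, "cholesterol": 240}

def distance(a, b):
    """Euclidean distance between two records over their shared factors."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
```

Add a factor and the space gains a dimension; patients who were "near" each other in four dimensions may be far apart in forty, which is the relational view the column describes.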

Link:
Tags, AI, and dimensions - KMWorld Magazine

The AI Revolution is Upon Us – And UC San Diego Researchers Are … – University of California San Diego

"We want to have the results within a week, so that we can really accelerate decision-making for climate scientists," said Yu, who is an assistant professor in the Department of Computer Science and Engineering at the Jacobs School of Engineering and the Halıcıoğlu Data Science Institute.

Ambitious? Yes. But that's where artificial intelligence comes in. Thanks to a $3.6 million grant awarded in 2021 by the Department of Energy, Yu and two UC San Diego colleagues, Yian Ma and Lawrence Saul, have teamed up with researchers at Columbia University and UC Irvine to develop new machine learning methods that can speed up these climate models, better predict the future, and improve our understanding of climate extremes.

This work comes at a crucial time, as it becomes increasingly important that we develop an accurate understanding of how climate change is impacting our Earth, our communities and our daily lives, and how to use that newfound knowledge to inform climate action. To date, the team has published more than 20 papers in both machine learning and climate science-related journals as they continue to push the boundaries of science and engineering on this highly consequential front.

To increase the accuracy of predictions, and to quantify their inherent uncertainty, the team is working on customizing algorithms to embed physical laws and first principles into deep learning models, a form of machine learning that essentially imitates the function of the human brain. It's no small task, but it's given them the opportunity to collaborate closely with climate scientists who are putting these machine learning methods into practical algorithms in climate modeling.
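One common way to embed physical laws into a model is a physics-informed loss: the training objective penalises both mismatch with observed data and violation of a governing equation. The sketch below uses a toy decay law du/dt = -k·u with a finite-difference residual; the function name, constants, and weighting are illustrative, not the team's actual method:

```python
import numpy as np

def physics_informed_loss(t, u_pred, u_obs, k=0.5, lam=1.0):
    """Mean-squared data misfit plus a penalty for violating
    the governing law du/dt = -k*u (derivative estimated by
    finite differences along the time grid t)."""
    data_loss = np.mean((u_pred - u_obs) ** 2)
    du_dt = np.gradient(u_pred, t)            # approximate derivative
    physics_loss = np.mean((du_dt + k * u_pred) ** 2)
    return data_loss + lam * physics_loss
```

A prediction that fits the data while breaking the physics is penalised, which pushes deep learning models toward physically plausible behaviour even where observations are sparse.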

"Because of this grant, we have established new connections and new collaborations to expand the impact of AI methods to climate science," said Yu. "We started working on algorithms and models with the application of climate in mind, and now we can really work closely with climate scientists to validate our models."

Read more here:
The AI Revolution is Upon Us – And UC San Diego Researchers Are ... - University of California San Diego

Godfather of AI Geoffrey Hinton quits Google and warns over dangers of machine learning – The Guardian

Google

The neural network pioneer says dangers of chatbots were "quite scary" and warns they could be exploited by "bad actors"

The man often touted as the "godfather of AI" has quit Google, citing concerns over the flood of fake information, videos and photos online and the possibility for AI to upend the job market.

Dr Geoffrey Hinton, who with two of his students at the University of Toronto built a neural net in 2012, quit Google this week, the New York Times reported.

Hinton, 75, said he quit to speak freely about the dangers of AI, and in part regrets his contribution to the field. He was brought on by Google a decade ago to help develop the company's AI technology.

Hinton's research led the way for current systems like ChatGPT.

He told the New York Times that until last year he believed Google had been a "proper steward" of the technology, but that changed once Microsoft started incorporating a chatbot into its Bing search engine, and the company became concerned about the risk to its search business.

Some of the dangers of AI chatbots were "quite scary", he told the BBC, warning they could become more intelligent than humans and could be exploited by "bad actors".

"I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have."

"So it's as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

Hinton's concern in the short term is something that has already become a reality: people will not be able to discern what is true any more, with AI-generated photos, videos and text flooding the internet.

The recent upgrades to image generators such as Midjourney mean people can now produce photo-realistic images; one such image of Pope Francis in a Balenciaga puffer coat went viral in March.

Hinton was also concerned that AI will eventually replace workers in roles such as paralegals, personal assistants and other "drudge work", and potentially more in the future.

Google's chief scientist, Jeff Dean, said in a statement that Google appreciated Hinton's contributions to the company over the past decade.

"I've deeply enjoyed our many conversations over the years. I'll miss him, and I wish him well!"

"As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We're continually learning to understand emerging risks while also innovating boldly."

It came as IBM CEO Arvind Krishna told Bloomberg that up to 30% of the company's back-office roles could be replaced by AI and automation within five years.

Krishna said hiring in areas such as human resources will be slowed or suspended, and could result in around 7,800 roles being replaced. IBM has a total global workforce of 260,000.

The Guardian has sought comment from IBM.

Last month, the Guardian was able to bypass a voice authentication system used by Services Australia using an online AI voice synthesiser, throwing into question the viability of voice biometrics for authentication.

Toby Walsh, the chief scientist at the University of New South Wales AI Institute, said people should be questioning any online media they see now.

"When it comes to any digital data you see, audio or video, you have to entertain the idea that someone has spoofed it."

More:
Godfather of AI Geoffrey Hinton quits Google and warns over dangers of machine learning - The Guardian

After Quitting Google, ‘Godfather of AI’ Is Now Warning of Its Dangers – Gizmodo

Megalithic tech companies such as Google, Meta, and Microsoft are so obsessed with AI development it seems impossible to steer any of them toward slowing down and actually thinking about the repercussions. Now one of the most prominent faces in artificial intelligence research, former Googler Dr. Geoffrey Hinton, has come down hard on the full-sprint pace of AI development, ultimately calling for some kind of global regulation.


According to an interview with The New York Times, Hinton, an award-winning researcher on AI, neural networks, and machine learning, is no longer comfortable pushing the boundaries of AI development without any kind of regulation or stopgap. The 75-year-old Hinton, who was a lead researcher in many aspects of AI development at Google, has come out saying "It is hard to see how you can prevent the bad actors from using [AI] for bad things."

He directly compared himself to Robert Oppenheimer, who helped develop the atomic bomb for the U.S. While Oppenheimer had made statements about pursuing science for science's sake, Hinton instead said "I don't think they should scale [AI] up more until they have understood whether they can control it." He further shared his concerns that AI would lead to massive job disruptions around the world.

Hinton got his "Godfather" title not with any offer you can't refuse, but from decades of research on AI. This came to a head with the neural network he helped build in 2012 with two of his students at the University of Toronto. That network was a machine learning program that could teach itself to identify objects like dogs, flowers, and so on, and it became a major stepping stone for modern transformer-based AI like diffusion AI image generators and large language models.

Google originally acquired the company formed out of Hinton's Toronto-based research in 2013. This let him establish a Toronto-based element of the Google Brain team overseeing AI development. After that, Google went on an AI spending spree when it acquired deep learning company DeepMind in 2014. Hinton's company, according to a 2021 Wired report, received numerous offers from tech giants including Microsoft and China-based Baidu, both of which are deep in the muck with their own push into AI development. In a March interview with CBS News, Hinton compared the recent rapid advancements in AI to "the Industrial Revolution, or electricity, or maybe the wheel."

It's unclear when Hinton made this heel-turn, but just a few months ago he was instead referring to AI as a "supernaturally precocious child". He compared AI training to caterpillars feeding on nutrients to become butterflies, further calling OpenAI's GPT-4 large language model "humanity's butterfly".

According to the Times, Hinton told Google in April that he planned to leave, and finally cut the cord after a call with CEO Sundar Pichai last Thursday. Though The New York Times implied that Hinton had left Google specifically to take umbrage with his old boss, the Turing Award winner claimed he only wished to speak up on the dangers of AI, adding that Google "has acted very responsibly".

Hinton's departure comes at a time of massive reorganization at his former company following mass layoffs. Last month, Google announced it was consolidating its two biggest AI teams, combining Google Brain and DeepMind into one unit, and also reorganized its AI leadership, with Brain lead Jeff Dean moving to a chief scientist position while DeepMind CEO Demis Hassabis is set to take control of all AI development.

So far, the overt calls for stalling AI development have come from outside big tech. In March, hundreds of leading minds and researchers circulated an open letter demanding companies pause advanced AI systems. The letter criticized how major tech companies were locked in an "out-of-control race" to develop and deploy ever more powerful "digital minds" that nobody could predict or control. Though that's not to say folks inside these companies don't have qualms. A recent report from Bloomberg claimed that people inside Google were especially concerned with the company's Bard AI. Staff said the chatbot was so bad it was constantly providing misinformation and lies to users.

Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators, The Best ChatGPT Alternatives, and Everything We Know About OpenAI's ChatGPT.

Read more from the original source:
After Quitting Google, 'Godfather of AI' Is Now Warning of Its Dangers - Gizmodo