Archive for the ‘Artificial Intelligence’ Category

The benefits and risks of Artificial Intelligence – IT Brief Australia

In little more than 12 months, generative AI has evolved from a technical novelty into a powerful business tool. However, senior IT managers believe the technology brings risks as well as benefits.

According to the Immuta 2024 State of Data Security Report, 88% of senior managers say their staff are already using AI tools, regardless of whether their organisation has a firm policy on adoption.

Asked to nominate the key IT security benefits offered by AI, respondents to the Immuta survey pointed to improved phishing attack identification and threat simulation as two of the biggest. Others included anomaly detection and better audits and reporting.

When it came to identifying AI-related risks, inadvertent exposure of sensitive information by employees and unauthorised use of purpose-built models out of context were nominated by respondents. Additional named risks included the inadvertent exposure of sensitive data by large language models (LLMs) and the poisoning of training data.

Continuing growth

Despite these concerns, organisational uptake of AI appears likely to remain brisk. Analyst firm Gartner predicts that IT spending will increase more than 70% during the next year, and a significant portion will be invested in AI-related technologies and tools. Organisations will need to continue to embrace this new technology to remain competitive and relevant in today's economic landscape.

It's likely that 2024 will also become the year of the AI control system. Aside from the hype surrounding generative AI, there is a broader issue around developing a control system for the technology. This is because AI brings an entirely new paradigm where there is little or no human control. AI initiatives, therefore, won't get into full-scale production without a new form of control system in place.

At the same time, organisations will come to realise that, as AI usage increases, they need to focus even more attention on data security. As we have seen with governments around the world, there has also been an urgent need to enact new laws and regulations to ensure that data privacy and data security concerns with generative AI are addressed.

As the technology evolves, it will become clear that the key to harnessing the power of large-language model (LLM)-based AI lies in having a robust data governance framework. Such a framework is essential not only for guiding the ethical and secure use of LLMs but also for establishing standards for measuring their outputs and ensuring integrity.

The evolution of LLMs will open new avenues for applications in data analysis, customer service, and decision-making processes, further embedding LLMs into the fabric of data-driven industries.

The biggest winners when it comes to AI usage will be the organisations that create real value from better data engineering processes that leverage models with their own data and business context. The key impact for these companies will be better knowledge management.

An ongoing reprioritisation and reassignment of resources

With the pace of change in technology and data usage likely to continue to increase, organisations will be forced to redirect resources into new data-related areas that will become priorities. Examples include data governance and compliance, data quality, and data integration.

Despite ongoing pressure to do more with less, organisations can't and won't halt investment in IT. These investments will be focussed on the critical building blocks that form the foundation of a modern data stack that is required to support AI initiatives.

Also, the traditional demarcation between data and application layers in an IT infrastructure will be replaced by a more integrated approach focused on data products. Rather than a few dozen apps, there will be hundreds of data products. Dubbed a data-centric architecture, this approach will allow organisations to extract greater value from their data resources and better support their operations.

By working closer to the data, data teams can reduce latency and improve performance, opening up new possibilities for real-time reporting and analytics. This, in turn, supports better decision-making and more efficient business processes.

The coming year will see some fundamental changes in the way businesses manage and work with AI and data. Those that take time to experiment with the technology and determine its best use cases will be best placed to extract maximum value and achieve optimal results.

Go here to see the original:
The benefits and risks of Artificial Intelligence - IT Brief Australia

Learn the ways of machine learning with Python through one of these 5 courses and specializations – Fortune

The fastest growing jobs in the world right now are ones dealing with AI and machine learning. That's according to the World Economic Forum.

This should come as no surprise, as new technology that is revolutionizing the way the world works through automation and machine intelligence is being deployed practically every day.


Beyond having foundational skills in mathematics and computer science and soft skills like problem-solving and communication, core to the AI and machine learning space is programming, specifically Python. The programming language is one of the most in-demand for all tech experts.

"Python plays an integral part in machine learning specialists' everyday tasks," says Ratinder Paul Singh Ahuja, CTO and VP at Pure Storage. He specifically points to its diverse set of libraries and their relevant roles.
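The article does not reproduce the specific libraries and roles Ahuja had in mind. Purely as an illustration, the short sketch below shows how several of the Python libraries most commonly used for machine learning work (NumPy, pandas and scikit-learn, chosen here as assumptions rather than taken from his remarks) typically fit together in an everyday workflow:

```python
# Illustrative only: a minimal end-to-end workflow using some of the Python
# libraries most commonly cited for machine learning (NumPy, pandas,
# scikit-learn). These are not necessarily the libraries Ahuja referred to.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Build a small synthetic dataset with NumPy and pandas.
rng = np.random.default_rng(seed=0)
df = pd.DataFrame({
    "feature_a": rng.normal(size=200),
    "feature_b": rng.normal(size=200),
})
df["label"] = (df["feature_a"] + df["feature_b"] > 0).astype(int)

# Split, train, and evaluate with scikit-learn.
X_train, X_test, y_train, y_test = train_test_split(
    df[["feature_a", "feature_b"]], df["label"], test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```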

As you can imagine, best practices in the ever-changing AI field may differ depending on the day, task, and company. So building foundational skills overall, and being able to differentiate yourself, is important in the space.

The good news for those looking to learn the ropes in the machine learning and Python space is that there are seemingly endless ways to gain knowledge online, and even for free.

For those exploring the subject on their own, resources like W3Schools, Kaggle, and Google's crash course are good options. Even something as simple as watching YouTube videos and checking out GitHub can be useful.

"I think if you focus on core technical skills, and also the ability to differentiate, I think that there's still plenty of opportunity for AI enthusiasts to get into the market," says Rakesh Anigundi, Ryzen AI product lead at AMD.

Anigundi adds that because the field and job market are so complicated, even companies themselves are trying to figure out which skills are most useful for building products and solving problems. So, doing anything you can to stay ahead of the game can be part of what helps propel your career.

For those looking for a bit of a deeper dive into machine learning with Python, Fortune has listed some of the options on the market; they're largely self-paced but vary slightly in terms of price and length.

Participants can watch hours of free videos about machine learning, and each lesson ends with a multiple-choice question. Users are then given five different challenges to take on. The interactive projects include the creation of a book recommendation engine, a neural network SMS text classifier, and a cat and dog image classifier.

Cost: Free

Length: Self-paced; 36 lessons + 5 projects

Course examples: TensorFlow; Deep Learning Demystified

Hosted with edX, this introductory course allows students to learn about machine learning and AI straight from two of Harvard's expert computer science professors. Participants are exposed to topics like algorithms, neural networks, and natural language processing. Video transcripts are also notably available in nearly a dozen other languages. For those wanting to learn more, the course is part of Harvard's computer science for artificial intelligence professional certificate program.

Cost: Free (certificate available for $299)

Length: 6 weeks (45 hours/week)

Course learning goals: Explore advanced data science; train models; examine results; recognize data bias

Data scientists from IBM guide students through machine learning algorithms, Python classification techniques, and data regressions. Participants are recommended to have a working knowledge of Python, data analysis, and data visualization, as well as high school-level mathematics.

Cost: $49/month

Length: 12 hours (approximately)

Module examples: Regression; Classification; Clustering

With nearly 100 hours of content, instructors from Stanford University and DeepLearning.AI, including renowned AI and edtech leader Andrew Ng, walk students through the foundations of machine learning. The course also focuses on applications of AI in the real world, especially in Silicon Valley. Participants are recommended to have some basic coding experience and knowledge of high school-level mathematics.

Cost: $49/month

Length: 2 months (10 hours/week)

Course examples: Supervised Machine Learning: Regression and Classification; Advanced Learning Algorithms; Unsupervised Learning, Recommenders, Reinforcement Learning

A professor from the University of Michigan's School of Information and College of Engineering teaches students the ins and outs of machine learning, with discussion of regressions, classifications, neural networks, and more. The course is for individuals who already have some existing knowledge of the data and AI world. It is part of a larger specialization focused on data science methods and techniques.

Cost: $49/month

Length: 31 hours (approximately)

Course examples: Fundamentals of Machine Learning; Supervised Machine Learning; Evaluation

Check out all of Fortune's rankings of degree programs, and learn more about specific career paths.

Link:
Learn the ways of machine learning with Python through one of these 5 courses and specializations - Fortune

A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. – EdSurge

When Satya Nitta worked at IBM, he and a team of colleagues took on a bold assignment: Use the latest in artificial intelligence to build a new kind of personal digital tutor.

This was before ChatGPT existed, and fewer people were talking about the wonders of AI. But Nitta was working with what was perhaps the highest-profile AI system at the time, IBM's Watson. That AI tool had pulled off some big wins, including beating humans on the Jeopardy quiz show in 2011.

Nitta says he was optimistic that Watson could power a generalized tutor, but he knew the task would be extremely difficult. "I remember telling IBM top brass that this is going to be a 25-year journey," he recently told EdSurge.

He says his team spent about five years trying, and along the way they helped build some small-scale attempts into learning products, such as a pilot chatbot assistant that was part of a Pearson online psychology courseware system in 2018.

But in the end, Nitta decided that even though the generative AI technology driving excitement these days brings new capabilities that will change education and other fields, the tech just isn't up to delivering on becoming a generalized personal tutor, and won't be for decades at least, if ever.

"We'll have flying cars before we will have AI tutors," he says. "It is a deeply human process that AI is hopelessly incapable of meeting in a meaningful way. It's like being a therapist or like being a nurse."

Instead, he co-founded a new AI company, called Merlyn Mind, that is building other types of AI-powered tools for educators.

Meanwhile, plenty of companies and education leaders these days are hard at work chasing that dream of building AI tutors. Even a recent White House executive order seeks to help the cause.

Earlier this month, Sal Khan, leader of the nonprofit Khan Academy, told the New York Times: "We're at the cusp of using A.I. for probably the biggest positive transformation that education has ever seen. And the way we're going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor."

Khan Academy has been one of the first organizations to use ChatGPT to try to develop such a tutor, which it calls Khanmigo and which is currently in a pilot phase in a series of schools.

Khan's system does come with an off-putting warning, though, noting that it sometimes makes mistakes. The warning is necessary because all of the latest AI chatbots suffer from what are known as "hallucinations," the word used to describe situations in which a chatbot simply fabricates details when it doesn't know the answer to a question asked by a user.

AI experts are busy trying to offset the hallucination problem, and one of the most promising approaches so far is to bring in a separate AI chatbot to check the results of a system like ChatGPT to see if it has likely made up details. That's what researchers at Georgia Tech have been trying, for instance, hoping that their multi-chatbot system can get to the point where any false information is scrubbed from an answer before it is shown to a student. But it's not yet clear whether that approach can reach a level of accuracy that educators will accept.
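EdSurge does not describe how the Georgia Tech system works under the hood. Purely as a hedged sketch of the general pattern of having a second model review the first model's draft, the snippet below uses the OpenAI Python client; the model name, prompts and helper function are illustrative assumptions, not the researchers' implementation:

```python
# A minimal sketch of the "second chatbot as fact-checker" pattern described
# above. This is NOT the Georgia Tech system; the model name, prompts and
# function are placeholders. Requires the `openai` package and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_with_check(question: str) -> str:
    # First call drafts an answer to the student's question.
    draft = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Second call reviews the draft and asks for unsupported claims to be
    # removed or flagged before the answer is shown to the student.
    review = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; could be a different model
        messages=[{
            "role": "user",
            "content": (
                "Review the answer below for claims that may be fabricated or "
                "unverifiable. Rewrite it with any such claims removed or "
                f"flagged.\n\nQuestion: {question}\n\nAnswer: {draft}"
            ),
        }],
    ).choices[0].message.content
    return review
```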

At this critical point in the development of new AI tools, though, it's useful to ask whether a chatbot tutor is the right goal for developers to head toward. Or is there a better metaphor than "tutor" for what generative AI can do to help students and teachers?

Michael Feldstein spends a lot of time experimenting with chatbots these days. He's a longtime edtech consultant and blogger, and in the past he wasn't shy about calling out what he saw as excessive hype by companies selling edtech tools.

In 2015, he famously criticized promises about what was then the latest in AI for education: a tool from a company called Knewton. The CEO of Knewton, Jose Ferreira, said his product would be "like a robot tutor in the sky that can semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile." That led Feldstein to respond that the CEO was selling "snake oil" because, Feldstein argued, the tool was nowhere near living up to that promise. (The assets of Knewton were quietly sold off a few years later.)

So what does Feldstein think of the latest promises by AI experts that effective tutors could be on the near horizon?

"ChatGPT is definitely not snake oil. Far from it," he tells EdSurge. "It is also not a robot tutor in the sky that can semi-read your mind. It has new capabilities, and we need to think about what kinds of tutoring functions today's tech can deliver that would be useful to students."

He does think tutoring is a useful way to view what ChatGPT and other new chatbots can do, though. And he says that comes from personal experience.

Feldstein has a relative who is battling a brain hemorrhage, and so he has been turning to ChatGPT to give him personal lessons in understanding the medical condition and his loved one's prognosis. As he gets updates from friends and family on Facebook, he says, he asks questions in an ongoing thread in ChatGPT to try to better understand what's happening.

"When I ask it in the right way, it can give me the right amount of detail about, 'What do we know today about her chances of being OK again?'" Feldstein says. "It's not the same as talking to a doctor, but it has tutored me in meaningful ways about a serious subject and helped me become more educated on my relative's condition."

While Feldstein says he would call that a tutor, he argues that it's still important that companies not oversell their AI tools. "We've done a disservice to say they're these all-knowing boxes, or they will be in a few months," he says. "They're tools. They're strange tools. They misbehave in strange ways, as do people."

He points out that even human tutors can make mistakes, but most students have a sense of what they're getting into when they make an appointment with a human tutor.

"When you go into a tutoring center in your college, they don't know everything. You don't know how trained they are. There's a chance they may tell you something that's wrong. But you go in and get the help that you can."

Whatever you call these new AI tools, he says, it will be useful to have an always-on helper that you can ask questions of, even if its results are just a starting point for more learning.

What are new ways that generative AI tools can be used in education, if tutoring ends up not being the right fit?

To Nitta, the stronger role is to serve as an assistant to experts rather than a replacement for an expert tutor. In other words, instead of replacing, say, a therapist, he imagines that chatbots can help a human therapist summarize and organize notes from a session with a patient.

"That's a very helpful tool rather than an AI pretending to be a therapist," he says. Even though that may be seen as boring by some, he argues that the technology's superpower is to automate things that humans don't like to do.

In the educational context, his company is building AI tools designed to help teachers, or to help human tutors, do their jobs better. To that end, Merlyn Mind has taken the unusual step of building its own so-called large language model from scratch, designed for education.

Even then, he argues that the best results come when the model is tuned to support specific education domains, by being trained with vetted datasets rather than relying on ChatGPT and other mainstream tools that draw from vast amounts of information from the internet.

"What does a human tutor do well? They know the student, and they provide human motivation," he adds. "We're all about the AI augmenting the tutor."

Go here to see the original:
A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. - EdSurge

Demystifying AI: The Probability Theory Behind LLMs Like OpenAI’s ChatGPT – PYMNTS.com

When a paradigm shift occurs, it is not always obvious to those affected by it.

But there is no eye of the storm equivalent when it comes to generative artificial intelligence (AI).

The technology is here. There are already various commercial products available for deployment, and organizations that can effectively leverage it in support of their business goals are likely to outperform their peers that fail to adopt the innovation.

Still, as with many innovations, uncertainty and institutional inertia reign supreme, which is why understanding how the large language models (LLMs) powering AI work is critical not just to piercing the black box of the technology's supposed inscrutability, but also to applying AI tools correctly within an enterprise setting.

The most important thing to understand about the foundational models powering today's AI interfaces and giving them their ability to generate responses is the simple fact that LLMs, like Google's Bard, Anthropic's Claude, OpenAI's ChatGPT and others, are just adding one word at a time.

Underneath the layers of sophisticated algorithmic calculations, that's all there is to it.

That's because, at a fundamental level, generative AI models are built to generate reasonable continuations of text by drawing from a ranked list of words, each given a different weighted probability based on the data set the model was trained on.
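A toy example makes this concrete. The candidate words and probabilities below are invented for illustration; in a real LLM they would be produced by the model itself, from its training data, at every step:

```python
# Toy illustration of next-word selection from a weighted probability
# distribution. The candidate words and probabilities are made up for the
# example; a real LLM derives them from its training data at each step.
import random

next_word_probs = {
    "tool": 0.40,
    "technology": 0.25,
    "model": 0.20,
    "paradigm": 0.10,
    "novelty": 0.05,
}

prompt = "Generative AI has evolved into a powerful business"
words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample one continuation; repeating this step word by word builds the output.
next_word = random.choices(words, weights=weights, k=1)[0]
print(prompt, next_word)
```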

Read more: There Are a Lot of Generative AI Acronyms. Here's What They All Mean

While news of AI that can surpass human intelligence is helping fuel the hype around the technology, the reality is driven far more by math than by myth.

"It is important for everyone to understand that AI learns from data. At the end of the day, [AI] is merely probabilities and statistics," Akli Adjaoute, AI pioneer and founder and general partner at venture capital fund Exponion, told PYMNTS in November.

But where do the probabilities that determine an AI system's output originate from?

The answer lies within the AI model's training data. Peeking into the inner workings of an AI model reveals not only that the next reasonable word is being identified, weighted and then generated, but also that this process operates below the level of whole words, as AI models break words apart into more manageable tokens.
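For example, the open-source tiktoken library exposes the tokenizer used by several OpenAI models and shows how a short phrase is split into sub-word tokens rather than whole words or single letters (the sample text here is arbitrary):

```python
# Illustration of sub-word tokenization using the open-source `tiktoken`
# library. Other model families use different tokenizers, but the principle
# is the same: text is broken into tokens, and the model predicts the next
# token rather than the next word or letter.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Demystifying artificial intelligence"
token_ids = enc.encode(text)

print(token_ids)
# Show which piece of text each token id maps back to.
print([enc.decode_single_token_bytes(t).decode("utf-8", errors="replace")
       for t in token_ids])
```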

That is a big part of why prompt engineering for AI models is an emerging skill set. After all, different prompts produce different outputs based on the probabilities inherent to each reasonable continuation, meaning that to get the best output, you need to have a clear idea of where to point the provided input or query.

It also means that the data informing the weight given to each probabilistic outcome must be relevant to the query. The more relevant, the better.

See also: Tailoring AI Solutions by Industry Key to Scalability

While PYMNTS Intelligence has found that more than eight in 10 business leaders (84%) believe generative AI will positively impact the workforce, generative AI systems are only as good as the data they're trained on. That's why the largest AI players are in an arms race to acquire the best training data sets.

"There's a long way to go before there's a futuristic version of AI where machines think and make decisions. Humans will be around for quite a while," Tony Wimmer, head of data and analytics at J.P. Morgan Payments, told PYMNTS in March. "And the more that we can write software that has payments data at the heart of it to help humans, the better payments will get."

That's why, to train an AI model to perform to the necessary standard, many enterprises are relying on their own internal data to avoid compromising model outputs. By creating vertically specialized LLMs trained for industry use cases, organizations can deploy AI systems that are able to find the signal within the noise, and that can be further fine-tuned to business-specific goals with real-time data.
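The article does not say how these vertically specialized models are built. One common approach, assumed here only for illustration, is supervised fine-tuning on a small set of vetted, in-domain examples; the sketch below writes such examples into the JSON-lines chat format accepted by several fine-tuning APIs, with invented file names and content:

```python
# Illustrative sketch only: preparing a small, curated set of in-domain
# examples in a JSON-lines chat format of the kind accepted by several
# fine-tuning APIs. The examples, file name, and domain are invented.
import json

domain_examples = [
    {
        "messages": [
            {"role": "system", "content": "You are an assistant for payments-fraud analysts."},
            {"role": "user", "content": "Summarise the risk signals in this transaction note."},
            {"role": "assistant", "content": "Key signals: mismatched billing country and unusually high velocity of transactions."},
        ]
    },
    # ...more vetted, domain-specific examples would follow...
]

# Write one JSON object per line, the usual input format for fine-tuning jobs.
with open("payments_finetune.jsonl", "w", encoding="utf-8") as f:
    for example in domain_examples:
        f.write(json.dumps(example) + "\n")
```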

As Akli Adjaoute told PYMNTS back in November, "if you go into a field where the data is real, particularly in the payments industry, whether it's credit risk, whether it's delinquency, whether it's AML [anti-money laundering], whether it's fraud prevention, anything that touches payments, AI can bring a lot of benefit."


Read the rest here:
Demystifying AI: The Probability Theory Behind LLMs Like OpenAI's ChatGPT - PYMNTS.com

The Urgent but Difficult Task of Regulating Artificial Intelligence – Amnesty International

By David Nolan, Hajira Maryam & Michael Kleinman, Amnesty Tech

The year 2023 marked a new era of AI hype, rapidly steering policy makers towards discussions on the safety and regulation of new artificial intelligence (AI) technologies. The feverish year in tech started with the launch of ChatGPT in late 2022 and ended with a landmark agreement on the EU AI Act being reached. Whilst the final text is still being ironed out in technical meetings over the coming weeks, early signs indicate the western world's first AI rulebook goes some way to protecting people from the harms of AI but still falls short in a number of crucial areas, failing to ensure human rights protections, especially for the most marginalised. This came soon after the UK Government hosted an inaugural AI Safety Summit in November 2023, where global leaders, key industry players, and select civil society groups gathered to discuss the risks of AI. Although the growing momentum and debate on AI governance is welcome and urgently needed, the key question for 2024 is whether these discussions will generate concrete commitments focused on the most important present-day AI risks, and, critically, whether they will translate into further substantive action in other jurisdictions.

Whilst AI developments do present new opportunities and benefits, we must not ignore the documented dangers posed by AI tools when they are used as a means of societal control, mass surveillance and discrimination. All too often, AI systems are trained on massive amounts of private and public data, data which reflects societal injustices, often leading to biased outcomes and exacerbating inequalities. From predictive policing tools, to automated systems used in public sector decision-making to determine who can access healthcare and social assistance, to monitoring the movement of migrants and refugees, AI has flagrantly and consistently undermined the human rights of the most marginalised in society. Other forms of AI, such as fraud detection algorithms, have also disproportionately impacted ethnic minorities, who have endured devastating financial problems as Amnesty International has already documented, while facial recognition technology has been used by the police and security forces to target racialised communities and entrench Israel's system of apartheid.

So, what makes regulation of AI complex and challenging? First, there is the vague nature of the term "AI" itself, making efforts to regulate this technology more cumbersome. There is no widespread consensus on the definition of AI because the term does not refer to a singular technology but rather encapsulates a myriad of technological applications and methods. The use of AI systems in many different domains across the public and private sectors means a large number of varied stakeholders are involved in their development and deployment; such systems are a product of labour, data, software and financial inputs, and any regulation must grapple with both upstream and downstream harms. Further, these systems cannot be strictly classified as hardware or software; rather, their impact comes down to the context in which they are developed and implemented, and regulation must take this into account.

As we enter 2024, now is the time to not only ensure that AI systems are rights-respecting by design, but also to guarantee that those who are impacted by these technologies are not only meaningfully involved in decision-making on how AI technology should be regulated, but also that their experiences are continually surfaced and are centred within these discussions.

Alongside the EU legislative process, the UK, US, and others have set out their distinct roadmaps and approaches to identifying the key risks AI technologies present, and how they intend to mitigate them. Whilst there are many complexities to these legislative processes, this should not delay efforts to protect people from the present and future harms of AI, and there are crucial elements that we, at Amnesty, know any proposed regulatory approach must contain. Regulation must be legally binding and centre the already documented harms to people subject to these systems. Commitments and principles on the responsible development and use of AI, the core of the current "pro-innovation" regulatory framework being pursued by the UK, do not offer adequate protection against the risks of emerging technology and must be put on a statutory footing.

Similarly, any regulation must include broader accountability mechanisms over and above the technical evaluations being pushed by industry. Whilst these may be a useful string in any regulatory toolkit's bow, particularly in testing for algorithmic bias, bans and prohibitions cannot be off the table for systems fundamentally incompatible with human rights, no matter how accurate or technically efficacious they purport to be.

Others must learn from the EU process and ensure there are no loopholes for public and private sector players to circumvent regulatory obligations; removing any exemptions for AI used within national security or law enforcement is critical to achieving this. It is also important that where future regulation limits or prohibits the use of certain AI systems in one jurisdiction, no loopholes or regulatory gaps allow the same systems to be exported to other countries where they could be used to harm the human rights of marginalised groups. This remains a glaring gap in the UK, US, and EU approaches, as they fail to take into account the global power imbalances of these technologies, especially their impact on communities in the Global Majority whose voices are not represented in these discussions. There have already been documented cases of outsourced workers being exploited in Kenya and Pakistan by companies developing AI tools.

We need more than lip service from lawmakers: we need binding regulation that holds companies and other key industry players to account and ensures that profits do not come at the expense of human rights protections. International, regional and national governance efforts must complement and catalyse each other, and global discussions must not come at the expense of meaningful national regulation or binding regulatory standards; these are not mutually exclusive. This is the level at which accountability is served, and we must learn from past attempts to regulate tech, which means ensuring robust mechanisms are introduced to allow victims of AI-inflicted rights violations to seek justice.

Read the original here:
The Urgent but Difficult Task of Regulating Artificial Intelligence - Amnesty International