Archive for the ‘Alphago’ Category

What can the current EU AI approach do to overcome the challenges … – Modern Diplomacy

In the 1970s, as researchers started to grasp the intricacies of genetics, they were inevitably faced with the ethical implications of intentionally altering genes in living organisms. While no technology existed at that time to make such modifications, it was clear that its emergence was just around the corner. In 1975, a landmark conference was held in Asilomar, California, bringing together not just scientists but also about 140 professionals ranging from legal experts to writers and journalists. The goal was to address the potential risks associated with gene manipulation. The conference led to the creation of a set of guiding principles that continue to have a lasting impact today. Asilomar serves as a singular example of effective self-regulation, proactive risk mitigation, and open communication with the public.

Today, as we stand on the cusp of a new, AI-driven era, there's again a palpable sense of anticipation across the globe, as new risks and opportunities spread out before us. AI has swiftly transitioned from technological novelty to pervasive force, reshaping our daily lives and industries, from pioneering projects like OpenAI to autonomous transport. The allure of generative AI applications has dwarfed past technological frenzies. While innovations like the internet, the steam engine, and the printing press ushered in transformative epochs of their own, AI holds the promise of instigating the most monumental shift in human history.

However, as this wave of innovation surges forward, the need for a comprehensive regulatory framework becomes increasingly urgent. An important goal, agreed upon by several stakeholders, should be to ensure the ethical, secure, and equitable use of AI for all. The conversation is not a hypothetical debate about the distant future; it's about what must be done today to secure a prosperous future for humanity and the planet.

Numerous stakeholders, from governments and international organisations to NGOs and tech giants, are scrambling to address the myriad challenges posed by AI. Whether driven by genuine concern or merely by the wish to cultivate a contemporary image, different initiatives are underway. The European Commission is pioneering efforts to craft the first-ever legal framework for AI[1]. The proposed legislation establishes different rules for different risk levels and has the potential to address AI risks for society. Yet it is uncertain whether this European effort can address all current, and especially future, challenges. Two glaring gaps persist in the European legislative effort, as well as in numerous parallel national and international initiatives.

First, the vast majority of current efforts focus on the present and on the impacts of narrow AI, that is, the current generation of AI tools capable of performing specific tasks (like ChatGPT, AlphaFold, or AlphaGo). Yet this preoccupation with narrow AI obscures the monumental, potentially catastrophic challenges presented by Artificial General Intelligence (AGI). AGI represents a form of AI with the capacity to comprehend, learn, and apply knowledge across a wide range of tasks and domains[2]. An AGI system connected to the internet and to myriad sensors and smart devices could solve complex problems, seek information by any means (even by directly interacting with humans), make logical deductions, and even rewrite its own code. AGI does not exist today, yet expert estimates[3] suggest it could arrive between 2035 and 2040, a timeline that coincides with the typical time needed to solidify a global AGI treaty and governance system. This synchronicity underscores the pressing need to pivot our focus, infusing foresight methodologies to discern and tackle imminent challenges and to prepare for unknown ones.

The second challenge for the ongoing legislative efforts is fragmentation. AI systems, much like living organisms, transcend political borders. Attempting to regulate AI through national or regional efforts carries a strong potential for failure, given the likely proliferation of AI capabilities. Major corporations and emerging AI startups outside the EU's control will persist in creating new technologies, making it nearly impossible to prevent European residents from accessing these advancements. In this light, several stakeholders[4] suggest that any policy and regulatory framework for AI must be established on a global scale. Additionally, Europe's pursuit of continent-wide regulation poses challenges to remaining competitive in the global AI arena if the sector enjoys a more relaxed regulatory framework in other parts of the world. Furthermore, Article 6 of the proposed EU Artificial Intelligence Act introduces provisions for high-risk AI systems, requiring developers and deployers themselves to ensure safety and transparency. However, the provision's self-assessment nature raises concerns about its effectiveness.

What must be done

In this rapidly changing and complex global landscape, is there any political space for the EU to take action? The pan-European study OurFutures[5] reveals that the vast majority of participants express deep concern about the future, with technology-related issues ranking high on their list, alongside social justice, nature, well-being, education, and community. Moreover, despite emerging signs of mistrust towards governments globally, citizens in the EU maintain confidence in government leaders as catalysts for positive change (Picture 1), and they prioritize the human condition and the environment over economic prosperity.

Picture 1: Who are the changemakers and what matters more (OurFutures)

The clock is ticking, but governments still have the opportunity to address societal concerns by taking bold steps. In the case of AI, the EU should assume a leadership role in global initiatives and embrace longtermism as a fundamental value, ensuring a sustainable future for current and future generations:

EU as a global sounding board. While the European Commission's legislative initiative on AI signifies a leap in the right direction, structured, productive collaboration with key international partners like the USA, China, UNESCO, and the OECD is essential, with the aim of setting up a global AI regulatory framework. The success of the Asilomar conference was rooted in its ability to create a voluntary set of globally respected rules. Violators faced condemnation from the global community, exemplified by the case of He Jiankui[6], who created the world's first genetically edited babies and was subsequently sentenced to prison. Drawing on its tradition of negotiating regulations with many diverse stakeholders, the EU should champion a global initiative under the UN to forge a consensus on AI regulation, and adapt to the diversity of approaches shown by other AI actors.

A technology monitoring system. A global technology observatory has already been suggested by the Millennium Project[7], the Carnegie Council for Ethics in International Affairs[8], and other experts. This organization should be empowered to supervise AI research, evaluate high-risk AI systems, and grant ISO-like certifications to AI systems that comply with standards. It should track technological progress and employ foresight methods to anticipate future challenges, particularly as AGI looms on the horizon. Such an entity, perhaps aptly named the International Science and Technology Organization (ISTO), building on the work done by ISO/IEC and the IEEE on ad hoc standards, could eventually extend its purview beyond AI to fields like synthetic biology and cognitive science. The usual obstacles (dissent over nuances, apprehensions about national sovereignty, and the intricate dance of geopolitics) could be eased by letting such a body grow out of the already extant standardization organizations mentioned above. The EU, with its rich legacy, is well poised to champion this cause in close collaboration with the UN and so expedite its realization.

Embrace Longtermism. Longtermism, the ethical view that prioritizes positively influencing the long-term future, is a moral imperative in an era of exponential technological advancements and complex challenges like the climate crisis. Embracing longtermism means designing policies that address risks as we transition from sub-human AI to greater-than-human AI. For the European Commission, initiatives to address AI challenges should not be viewed as mere regulation but as a unique opportunity to etch its commitment to a secure, ethical AI future into history. A longtermist perspective on AI fits the idea of AI alignment put forth by numerous scholars[9], which addresses diverse concerns related to AI safety and aims to ensure that AI remains aligned with our objectives rather than going astray with unintended consequences.

As the world races against the clock to regulate AI, the EU has the potential to be a trailblazer. The EU's initiatives to address AI challenges should not be considered merely a regulatory endeavor; they are an unparalleled opportunity. Embracing longtermism and spearheading the establishment of an ISTO could be the EU's crowning achievement. It's time for the EU to step up, engage in proactive diplomacy, and pave the way for a sustainable AI future that respects the values and concerns of people today and tomorrow.

[1] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[2] https://www.gartner.com/en/information-technology/glossary/artificial-general-intelligence-agi

[3] Macey-Dare, Rupert, "How Soon is Now? Predicting the Expected Arrival Date of AGI - Artificial General Intelligence" (June 30, 2023). Available at SSRN: https://ssrn.com/abstract=4496418

[4] For example https://www.forbes.com/sites/hecparis/2022/09/09/regulating-artificial-intelligenceis-global-consensus-possible/?sh=a505f237035c

[5] https://knowledge4policy.ec.europa.eu/projects-activities/ourfutures-images-future-europe_en

[6] https://www.bbc.com/news/world-asia-china-50944461

[7] https://www.millennium-project.org/projects/workshops-on-future-of-worktechnology-2050-scenarios/

[8] Global AI Observatory (GAIO): https://www.carnegiecouncil.org/media/article/a-framework-for-the-international-governance-of-ai

[9] For example: http://lcfi.ac.uk/projects/completed-projects/value-alignment-problem/

Here is the original post:
What can the current EU AI approach do to overcome the challenges ... - Modern Diplomacy

If I had to pick one AI tool… this would be it. – Exponential View

There are so many new artificial intelligence products out there. Which ones are really worth your time?

If I had to pick one, it wouldn't be ChatGPT or Claude. It would be Perplexity.ai.

Since 1 October I've logged more than 268 queries on Perplexity from my laptop alone (I use it on my phone, too). It's displacing a large number of my Google searches.

I decided to speak to the co-founder and CEO of Perplexity, Aravind Srinivas. Aravind and his team are fresh off a $500 million funding round led by IVP.

You can watch our discussion in the video embedded in this post. The full hour-long discussion and transcript are open to paying members of Exponential View.

Of the many brilliant insights in our conversation, I was particularly excited to cover the following areas:

Google's innovator's dilemma.

The fuzzy art of shipping products built on AI models.

AI as ignition for a new era of human entrepreneurship.

Mapping out the route to AGI.

Going from autocomplete to autopilot in the coming years.

Safety in a world with billions of AIs.

AI open-source: democratizing progress or losing control?

Beyond the technology: how do we get the public behind this journey?


Azeem Azhar: Aravind, thanks for taking a few moments off the rocket ship to speak to me.

Link:
If I had to pick one AI tool... this would be it. - Exponential View

For the first time, AI produces better weather predictions — and it’s … – ZME Science

AI-generated image.

Predicting the weather is notoriously difficult. Not only are there a million and one parameters to consider, but there's also a good degree of chaotic behavior in the atmosphere. But DeepMind's scientists (the same group that brought us AlphaGo and AlphaFold) have developed a system that can revolutionize weather forecasting. This advanced AI model leverages vast amounts of data to generate highly accurate predictions.

Weather forecasting, an indispensable tool in our daily lives, has undergone tremendous advancements over the years. Today's 6-day forecast is as good as (if not better than) the 3-day forecast from 30 years ago. Storms and extreme weather events rarely catch people off-guard. You may not notice it because the improvement is gradual, but weather forecasting has progressed greatly.

This is more than just a convenience; it's a lifesaver. Weather forecasts help people prepare for extreme events, saving lives and money. They are indispensable for farmers protecting their crops, and they significantly impact the global economy.

This is exactly where AI enters the room.

DeepMind scientists now claim they've made a remarkable leap in weather forecasting with their GraphCast model. GraphCast is a sophisticated machine-learning algorithm that outperforms conventional weather forecasting around 90% of the time.

"We believe this marks a turning point in weather forecasting," Google's researchers wrote in a study published Tuesday.

Crucially, GraphCast offers warnings much faster than standard models. For instance, in September, GraphCast accurately predicted that Hurricane Lee would make landfall in Nova Scotia nine days in advance. Currently used models predicted it only six days in advance.

The method that GraphCast uses is significantly different. Current forecasts typically rely on a large set of carefully defined physics equations. These are then transformed into algorithms and run on supercomputers, where the models are simulated. As mentioned, scientists have used this approach with great results so far.

However, this approach requires a lot of expertise and computational power. Machine learning offers a different route. Instead of running equations on the current weather conditions, you look at historical data: you see what type of conditions led to what type of weather. It gets even better: you can mix conventional methods with this new AI approach and get fast, accurate forecasts.
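To make the contrast concrete, here is a minimal, purely illustrative sketch of the data-driven idea in Python: fit a simple linear model that maps the last two days of (synthetic) temperatures to the next day, using only historical records. GraphCast itself is a far larger graph neural network trained on real reanalysis data; every number and name in this snippet is an invented stand-in.

```python
import numpy as np

# Synthetic "historical record": a seasonal cycle plus noise.
rng = np.random.default_rng(42)
days = np.arange(1000)
temps = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, size=1000)

# Learn from history: build (yesterday, today) -> tomorrow training pairs.
X = np.column_stack([temps[:-2], temps[1:-1], np.ones(len(temps) - 2)])
y = temps[2:]

# A least-squares fit stands in for "training" here; no physics equations.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict tomorrow from the two most recent observations.
tomorrow = coef @ np.array([temps[-2], temps[-1], 1.0])
print(f"Predicted next-day temperature: {tomorrow:.1f} degrees")
```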

Crucially, GraphCast and traditional approaches go hand-in-hand: "We trained GraphCast on four decades of weather reanalysis data, from the ECMWF's ERA5 dataset. This trove is based on historical weather observations such as satellite images, radar, and weather stations, using a traditional numerical weather prediction (NWP) to fill in the blanks where the observations are incomplete, to reconstruct a rich record of global historical weather," writes lead author Remi Lam of DeepMind.

While GraphCast's training was computationally intensive, the resulting forecasting model is highly efficient. Making 10-day forecasts with GraphCast takes less than a minute on a single Google TPU v4 machine. For comparison, a 10-day forecast using a conventional approach can take hours of computation on a supercomputer with hundreds of machines.

The algorithm isn't perfect; it still lags behind conventional models in some regards (especially in precipitation forecasting). But considering how easy it is to use, it's at least an excellent complement to existing forecasting tools. There's another exciting bit about it: it's open source. This means that companies and researchers can use and change it to better suit their needs.

"By open-sourcing the model code for GraphCast, we are enabling scientists and forecasters around the world to benefit billions of people in their everyday lives. GraphCast is already being used by weather agencies," adds Lam.

The significance of this development cannot be overstated. As our planet faces increasingly unpredictable weather patterns due to climate change, the ability to accurately and quickly predict weather events becomes a critical tool in mitigating risks. The implications are far-reaching, from urban planning and disaster management to agriculture and air travel.

Moreover, the open-source nature of GraphCast democratizes access to cutting-edge forecasting technology. By making this powerful tool available to a wide range of users, from small-scale farmers in remote areas to large meteorological organizations, the potential for innovation and localized weather solutions increases exponentially.

No doubt, we're witnessing another field where machine learning is making a difference. The marriage of AI and weather forecasting is not just a fleeting trend but a fundamental shift in how we understand and anticipate the whims of nature.

Read more here:
For the first time, AI produces better weather predictions -- and it's ... - ZME Science

Understanding the World of Artificial Intelligence: A Comprehensive … – Medium

Welcome to the fascinating world of Artificial Intelligence (AI). As technology continues to evolve at an unprecedented pace, AI stands at the forefront, reshaping our lives and industries. Let's dive deep into the core concepts that make AI the marvel it is today.

Algorithms are the unsung heroes of the digital age. Think of them as a chef's recipe, detailing step-by-step instructions for a computer to whip up a delightful dish. From ancient Babylonian clay tablets to today's sophisticated computer systems, algorithms have been guiding processes and decisions. For instance, the age-old Euclidean algorithm for finding greatest common divisors is still very much in use. Even our daily activities, like brushing our teeth, can be broken down into a series of algorithmic steps.
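Since the text names it, here is the Euclidean algorithm written out as exactly the kind of step-by-step recipe a computer can follow:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: keep replacing the pair (a, b) with
    (b, a mod b) until the remainder is zero; the survivor is
    the greatest common divisor."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```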

Machine Learning (ML) is like giving computers a brain of their own. Instead of spoon-feeding them every piece of information, we let them learn from patterns and data. Imagine showing a computer millions of pictures of cats and dogs. Over time, it starts recognizing the subtle differences and can classify new images with remarkable accuracy. However, while they're pattern-recognition champions, they might stumble when faced with tasks requiring intricate reasoning.
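As a toy illustration of learning from examples rather than explicit rules, the sketch below averages made-up cat and dog feature vectors into prototypes and labels a new image by proximity. The numbers and feature names are invented for illustration; real systems learn their features from millions of raw images.

```python
import numpy as np

# Pretend each "photo" has been reduced to two made-up features
# (say, ear pointiness and snout length).
cats = np.array([[0.9, 0.2], [0.8, 0.3], [0.95, 0.25]])
dogs = np.array([[0.2, 0.9], [0.3, 0.8], [0.25, 0.85]])

# "Training" here is simply averaging each class into a prototype.
cat_center, dog_center = cats.mean(axis=0), dogs.mean(axis=0)

def classify(photo: np.ndarray) -> str:
    # A new image gets the label of whichever prototype it sits closer to.
    d_cat = np.linalg.norm(photo - cat_center)
    d_dog = np.linalg.norm(photo - dog_center)
    return "cat" if d_cat < d_dog else "dog"

print(classify(np.array([0.85, 0.3])))  # cat
```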

Natural Language Processing (NLP) is the art and science of making machines understand and respond to human language. If you've ever chatted with Siri or Alexa, you've experienced NLP in action. Today's advanced NLP systems can even discern the context of words. For instance, they can figure out whether "club" refers to a sandwich, a golf game, or a nightlife venue based on the surrounding text.
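A deliberately crude sketch of that context trick: score each sense of "club" by how many of its typical companion words appear nearby. The hand-written cue lists are invented for illustration; modern NLP systems learn these associations from data rather than from lists.

```python
# Each sense of "club" comes with words that tend to appear around it.
SENSES = {
    "sandwich": {"turkey", "bacon", "lunch", "toasted", "mayo"},
    "golf": {"swing", "iron", "course", "fairway", "tee"},
    "nightlife": {"dance", "music", "dj", "drinks", "night"},
}

def disambiguate(sentence: str) -> str:
    # Pick the sense whose cue words overlap most with the sentence.
    words = set(sentence.lower().split())
    scores = {sense: len(words & cues) for sense, cues in SENSES.items()}
    return max(scores, key=scores.get)

print(disambiguate("We grabbed a toasted club with bacon for lunch"))  # sandwich
```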

Neural Networks take inspiration from the human brain. Just as our brain has neurons that transmit signals, AI has artificial neurons, or nodes, that communicate. These networks continuously learn and adapt. For instance, platforms like Pinterest use neural networks to curate content that resonates with users' preferences.

Deep Learning is like Neural Networks on steroids. The "deep" signifies the multiple layers of artificial neurons. These layers enable the system to process information in a more intricate manner, making it adept at handling complex tasks.
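A minimal sketch of what "multiple layers of artificial neurons" means in code: each layer computes a weighted sum of its inputs and passes it through a nonlinearity, and the output of one layer feeds the next. The weights below are random placeholders, not trained values; real networks learn them from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The nonlinearity: a neuron "fires" only for positive input.
    return np.maximum(0, x)

# Two stacked layers: 3 inputs -> 4 hidden neurons -> 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = relu(W1 @ x + b1)  # first layer of artificial neurons
    return W2 @ hidden + b2     # second layer combines their signals

print(forward(np.array([0.5, -0.2, 1.0])))
```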

Large Language Models (LLMs) are the maestros of text. They can summarize, create, and even predict text. These models are trained on vast amounts of data, making them incredibly versatile. They owe their efficiency to the transformer model, a groundbreaking development by Google in 2017.

Generative AI can craft content, be it text, images, or even audio. By feeding specific prompts into foundation models, we get outputs tailored to our needs. These models have given birth to innovations like OpenAI's ChatGPT and Google Bard.

Chatbots are our digital conversationalists. Powered by Generative AI, they can engage in meaningful dialogues, answer queries, and even generate content in the style of famous personalities. ChatGPT, for instance, can discuss topics ranging from history to music and offer insights on a plethora of subjects.

Hallucination in AI is when a model produces outputs that might sound plausible but aren't rooted in reality. It's essential to differentiate between hallucinations and biases, as the former is an output error, while the latter stems from skewed training data.

Artificial General Intelligence (AGI) is the zenith of AI development. It's the dream of creating machines that can think, learn, and adapt just like humans. While we're still on the journey towards AGI, advancements like DeepMind's AlphaGo and MuZero show promising strides in that direction.

The realm of AI is vast and ever-evolving. As we continue to harness its potential, we're not just reshaping technology but also redefining the boundaries of human-machine collaboration. Embrace the journey, for the future is AI!


Here is the original post:
Understanding the World of Artificial Intelligence: A Comprehensive ... - Medium

On AI and the soul-stirring char siu rice – asianews.network

October 11, 2023

KUALA LUMPUR: Limitations of traditional programming

Firstly, let's consider traditional computer programming.

Here, the computer acts essentially as a puppet, precisely following a set of explicit human-written instructions.

Take a point-of-sale system at a supermarket as an example: scan a box of Cheerios, and it charges $3; scan a Red Bull, it's $2.50.
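In code, that puppet-like obedience is nothing more than an explicit lookup table. A minimal sketch (prices taken from the example above, the rest invented):

```python
# Every behaviour is a rule a human wrote down in advance.
PRICES = {"cheerios": 3.00, "red bull": 2.50}

def scan(item: str) -> float:
    # The computer does exactly what it is told and nothing more;
    # an item missing from the table simply raises an error.
    return PRICES[item.lower()]

print(scan("Cheerios"))  # 3.0
print(scan("Red Bull"))  # 2.5
```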

This robotic repetition of specific commands is probably the most familiar aspect of computers for many people.

This is akin to rote learning from a textbook, start to finish.

But this programmed obedience has limitations, similar to how following a fixed recipe restricts culinary creativity.

Traditional programming struggles when faced with complex or extensive data.

A set recipe may create a delicious Beef Wellington, but it lacks the capacity to innovate or adapt.

Furthermore, not all data fits neatly into an "A corresponds to B" model.

Take YouTube videos: their underlying messages can't be easily boiled down into basic algorithms.

This rigidity led to the advent of machine learning, or AI, which emerged to discern patterns in data without being explicitly programmed to do so.

Remarkably, the core tenets of machine learning are not entirely new.

Groundwork was being laid as far back as the mid-20th century by pioneers like Alan Turing.

Laksa - Penang + Ipoh

During my childhood, my mother saw the value in non-traditional learning methods.

She enrolled me in a memory training course that discouraged rote memorization.

Instead, the emphasis was on creating mind maps and making associative connections between different pieces of information.

Machine learning models operate on a similar principle. They generate their own sort of mind maps, condensing vast data landscapes into more easily navigated territories.

This allows them to form generalizations and adapt to new information.

For instance, if you type "King - Man + Woman" into ChatGPT, it responds with "Queen."

This demonstrates that the machine isn't just memorizing words, but understands the relationships between them.

In this case, it deconstructs "King" into something like "royalty + man."

When you subtract "man" and add "woman," the equation becomes "royalty + woman," which matches "Queen."

For a more localized twist, try typing "Laksa - Penang + Ipoh" into ChatGPT. You'll get "Hor Fun." Isn't that fun?
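Under the hood, this analogy trick is vector arithmetic on word embeddings. Below is a toy sketch with two hand-picked dimensions (royalty and maleness); real embeddings have hundreds of learned dimensions, and the vectors here are invented purely to make the arithmetic visible.

```python
import numpy as np

# Invented two-dimensional "embeddings": (royalty, maleness).
vec = {
    "king":  np.array([0.9,  0.9]),
    "man":   np.array([0.0,  0.9]),
    "woman": np.array([0.0, -0.9]),
    "queen": np.array([0.9, -0.9]),
    "apple": np.array([0.05, 0.1]),
}

# King - Man + Woman lands almost exactly on Queen.
target = vec["king"] - vec["man"] + vec["woman"]

def closest(v: np.ndarray, exclude: set) -> str:
    # Cosine similarity: which known word points in the same direction?
    sims = {w: float(v @ u) / (np.linalg.norm(v) * np.linalg.norm(u))
            for w, u in vec.items() if w not in exclude}
    return max(sims, key=sims.get)

print(closest(target, exclude={"king", "man", "woman"}))  # queen
```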

Knowledge graphs and cognitive processes

Machine learning fundamentally boils down to compressing a broad swath of world information into an internal architecture.

This enables machine learning to exhibit what we commonly recognize as intelligence, a mechanism strikingly similar to human cognition.

This idea of internal compression and reconstruction is not unique to machines.

For example, a common misconception is that our eyes function like high-definition cameras, capturing every detail within their view.

The reality is quite different. Just as machine learning models process fragmented data, our brains take in fragmented visual input and then reconstruct it into a more complete picture based on pre-existing knowledge.

Our brain's role in filling in these perceptual gaps also makes us susceptible to optical illusions.

You might see two people of identical height appear different depending on their surroundings.

This phenomenon stems from our brain's reliance on built-in rules to complete the picture, and manipulating these rules can produce distortions.

Speaking of rule-breaking, recall the Go match between AlphaGo and Lee Sedol.

The human side was losing until Sedol executed a move that AlphaGo's internal knowledge graph hadn't anticipated.

This led to several mistakes by the AI, allowing Sedol to win that round.

Here too, the core concept of data reconstruction is at play.

Beyond chess: The revolution in deep learning

The creation and optimization of knowledge graphs have always been a cornerstone of machine learning.

However, for a long time, this area remained our blind spot.

In the realm of chess, before the advent of deep learning, we leaned heavily on human experience.

We developed chess algorithms based on what we thought were optimal rules, akin to following a fixed recipe for a complex dish like Beef Wellington.

We believed our method was fool-proof.

This belief was challenged by Rich Sutton, a luminary in machine learning, in his blog post "The Bitter Lesson."

According to Sutton, our tendency to assume that we have the world all figured out is inherently flawed and short-sighted.

In contrast, recent advancements in machine learning, including AlphaGo Zero and the ChatGPT you're interacting with now, adopt a more flexible, "Char Siu Rice" approach.

They learn from raw data with minimal human oversight.

Sutton argues that, given the continued exponential growth in computing power evidenced by Moore's Law, this method of autonomous learning is the most sustainable path forward for AI development.

While the concept of computers "learning on their own" might unnerve some people, let's demystify that notion.

Far from edging towards human-like self-awareness or sentience, these machines are engaging in advanced forms of data analysis and pattern recognition.

Machine learning models perform a complex dance of parsing, categorizing, and linking large sets of data, akin to an expert chef intuitively knowing how to meld flavors and techniques.

These principles are now entrenched in our daily lives.

When you search for something on Google or receive video recommendations on TikTok, it's these very algorithms at work.

So, instead of indulging in unwarranted fears about the future of machine learning, let's appreciate the advancements that bring both simplicity and complexity into our lives, much like a perfect bowl of Char Siu Rice.


(Yuan-Sen Ting graduated from Chong Hwa Independent High School in Kuala Lumpur before earning his degree from Harvard University in 2017. Subsequently, he was honored with a Hubble Fellowship from NASA in 2019, allowing him to pursue postdoctoral research at the Institute for Advanced Study in Princeton. Currently, he serves as an associate professor at the Australian National University, splitting his time between the School of Computing and the Research School of Astrophysics and Astronomy. His primary focus is on utilizing advanced machine learning techniques for statistical inference in the realm of astronomical big data.)

Continue reading here:
On AI and the soul-stirring char siu rice - asianews.network