Archive for the ‘Alphazero’ Category

Liability Considerations for Superhuman (and – Fenwick & West LLP

A fascinating question to consider in the field of artificial intelligence is what that intelligence should resemble. Modern-day deep neural networks (DNNs) do not bear much resemblance to the complex network of neurons that makes up the human brain; however, the building blocks of such DNNs (the artificial neuron, or perceptron, devised by McCulloch and Pitts back in 1943) were biologically motivated and intended to mimic human neuronal firing. Alan Turing's famous Turing test (or imitation game) equates intelligence with conversational indistinguishability between person and machine. Is the goal to develop AI models that reason like a person, or to create AI models capable of superhuman performance, even if such performance is achieved in a foreign and unfamiliar manner? And how do these two different paths affect considerations of liability?
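The McCulloch-Pitts unit mentioned above fits in a few lines of code. This is a simplified sketch for illustration (the historical model used binary inputs and a hard threshold, and treated inhibitory inputs specially; the function name and weights here are our own):

```python
def mcp_neuron(inputs, weights, threshold):
    """Simplified McCulloch-Pitts neuron: fire (output 1) if and only if
    the weighted sum of binary inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With unit weights and threshold 2, the neuron computes logical AND
and_gate = [mcp_neuron([a, b], [1, 1], 2) for a in (0, 1) for b in (0, 1)]
```

Networks of such threshold units were shown by McCulloch and Pitts to be able to compute logical functions, which is what made them an appealing abstraction of neuronal firing.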

The answer to this question is highly contextual, and the motivations in each case are interesting and varied. For instance, consider the history of AI's role in the board games chess and Go. Each game's history follows the same trajectory: human superiority at first, followed by a period in which the combination of human plus AI was the strongest player, and concluding with AI alone being dominant. Currently, giving a human some control over an AI chess or Go system only hampers performance, because these AI systems play the game at a level sometimes difficult for humans to understand, such as AlphaGo's so-called "alien" move 37 in the epic faceoff with Lee Sedol, or AlphaZero's queen-in-the-corner move, which DeepMind co-founder Demis Hassabis described as "like chess from another dimension." In such cases, the inscrutability of the AI's superhuman decisions is not necessarily a problem, and recent research has shown that it has even aided humans by spurring them to eschew traditional strategies and explore novel, winning gameplay. Of course, AI vendors should advertise an AI model as exhibiting superhuman performance only if it truly exceeds human capabilities: the FTC recently issued guidance warning against exaggerated claims for AI products.

Unlike board games, in the high-stakes realm of medical AI, having an AI model that reasons and performs in a manner similar to humans may favorably shift the liability risk profile for those developing and using such technology. For example, patients likely want an AI model that makes a diagnosis the way a typical physician does, but better (e.g., the AI is still looking for the same telltale shadows on an x-ray or the same biomarker patterns in a blood panel). The ability of medical AI models to provide such explanations is also relevant to regulators such as the FDA, which notes that an algorithm's inability to provide the basis for its recommendations can disqualify the algorithm from being classified as non-device Clinical Decision Support Software; such classification is desirable because it is excluded from the FDA's regulatory oversight and hence reduces regulatory compliance overhead.

Another interesting example comes from researchers who demonstrated that medical AI models can determine the race of a patient merely by looking at a chest x-ray, even when the image is degraded to the point that a physician cannot tell that the image is an x-ray at all. The researchers note that such inscrutable superhuman performance is actually undesirable in this case, as it may increase the risk of perpetuating or exacerbating racial disparities in the healthcare system. Hence it can sometimes be desirable to have a machine vision system see the world in a way similar to humans. The concern is whether this might come at a cost to the performance of the AI system, because an underperforming AI model introduces the potential for liability when that underperformance results in harm.

Luckily, some recent research gives us reason for optimism on this point, showing that sometimes you can have your cake and eat it too. This research involves Vision Transformers (ViTs), which utilize the Transformer architecture originally proposed for text-based applications back in 2017. The Transformer architecture for text played a large part in the rapid development and success of modern-day large language models (such as Google's Bard), and now it is leading to great strides in the machine vision domain as well, an area that until now had been dominated by the convolutional neural network (CNN) architecture. The ViT in this research is substantially scaled up, with a total of 22 billion parameters; for reference, the previous record holder had four billion parameters. The ViT was also trained on a much larger dataset of four billion images, as opposed to the previously used dataset of 300 million images. For more details, the academic paper also provides the ViT's model card, essentially a nutrition label for machine learning models. This research is impressive not only because of its scale and the state-of-the-art results it achieved, but also because the resulting model exhibited an unexpected and humanlike quality: a focus on shape rather than texture.

Most machine vision models demonstrate a strong texture bias. This means that, in making an image classification decision, the AI model may rely 70%-80% on the textures in the image and only 20%-30% on the shapes. This is in stark contrast to humans, who exhibit a strong 96% shape bias, with only a 4% focus on texture. The ViT in the research above achieves an 87% shape bias with a 13% focus on texture. Although not quite at human level, this is a radical reversal compared to previous state-of-the-art machine vision models. As the researchers note, this is a substantial improvement in the AI model's alignment with human visual object recognition. This emergent humanlike capability shows that improved performance does not always need to come at the cost of inscrutability. In fact, the two sometimes travel hand in hand, as with this ViT, which achieves impressive, if not superhuman, performance while also exhibiting improved scrutability by aligning with the human emphasis on shape in visual recognition tasks.
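Shape bias is typically measured on cue-conflict images, where an image's shape points to one class and its texture to another; the reported percentage is the fraction of such classifications decided by the shape cue. A minimal sketch of that metric (the function and variable names here are illustrative, not taken from the paper):

```python
def shape_bias(decisions):
    """Fraction of cue-conflict classifications that followed the shape cue.

    `decisions` holds the string 'shape' or 'texture' for each image the
    model classified as either its shape class or its texture class."""
    shape = decisions.count("shape")
    texture = decisions.count("texture")
    if shape + texture == 0:
        raise ValueError("no cue-conflict decisions to score")
    return shape / (shape + texture)

# A model matching the ViT described in the text:
# 87 shape-driven decisions out of 100 cue-conflict trials
bias = shape_bias(["shape"] * 87 + ["texture"] * 13)  # 0.87
```

On this metric, a typical CNN would score in the 0.2-0.3 range, a human around 0.96.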

So, is it safer from a liability perspective for your AI model to (a) reason like a human and perhaps suffer from some of our all-too-human underperforming flaws, or (b) exhibit superhuman performance and suffer from inscrutability? As with so many things, the lawyerly answer is "it depends," or more specifically, it depends on the context of the AI model's use. But luckily, as the aforementioned Vision Transformer research demonstrates, sometimes you can have the best of both worlds with a scrutable and high-performing AI system.

Published by PLI Chronicle.

See the original post here:
Liability Considerations for Superhuman (and - Fenwick & West LLP

Aston by-election minus one day The Poll Bludger – The Poll Bludger

A belated look at the first federal by-election since the Albanese government came to power.

Tomorrow is the day of the federal by-election for Aston, for which I have produced an overview page here. As is now customary, this site will feature its acclaimed live results updates, along the lines of the format you can see on the seat pages for the New South Wales election, and may very well be the only place on the internet where you will find results reported at booth level. I discussed the by-election with Ben Raue at The Tally Room for a podcast on his website that was conducted on Monday, though there was nothing I said in it that wouldn't hold at this later remove.

The only polling I'm aware of is a report yesterday for Sky News that Labor internal polling points to a status quo result, with the Liberals retaining a margin of 52-48. However, the poll also found local voters far more favourable to Anthony Albanese (56% approval and 26% disapproval) than Peter Dutton (21% approval and 50% disapproval).

William Bowe is a Perth-based election analyst and occasional teacher of political science. His blog, The Poll Bludger, has existed in one form or another since 2004, and is one of the most heavily trafficked websites on Australian politics.

Go here to see the original:
Aston by-election minus one day The Poll Bludger - The Poll Bludger

No-Castling Masters: Kramnik and Caruana will play in Dortmund – ChessBase

Press release by Initiative Pro Schach

The field of participants for the NC World Masters, part of the 50th edition of the International Dortmund Chess Days Festival, has been determined. The 14th World Chess Champion, Vladimir Kramnik, and former World Championship challenger Fabiano Caruana will be playing no-castling chess at the Goldsaal of the Dortmund Westfalenhallen from 26 June.

Navigating the Ruy Lopez Vol.1-3

The Ruy Lopez is one of the oldest openings and continues to enjoy high popularity from club level to the absolute world top. In this video series, American super-GM Fabiano Caruana, talking to IM Oliver Reeh, presents a complete repertoire for White.

Vladimir Kramnik already played a match of no-castling chess against Viswanathan Anand in the first edition of the event, in 2021. He is a great advocate of the chess variant and researched it early on together with AlphaZero, the AI engine developed by DeepMind, the world-leading company in this field.

Vladimir Kramnik

Fabiano Caruana is not only a World Championship challenger, but also a three-time winner of the Dortmund super-tournament. He won the event in 2012, 2014 and 2015. His last visit to Dortmund was in 2016, when he finished in third place.

Fabiano Caruana

Last year's winner, Dmitrij Kollars, will also return to Dortmund. The German national player was a late replacement at the 2022 NC World Masters and adapted to the special format very quickly. Kollars celebrated the biggest success of his career by winning the tournament ahead of Viswanathan Anand.

Dmitrij Kollars

The fourth player is Pavel Eljanov. The Ukrainian impressively won the grandmaster tournament of the International Dortmund Festival two years in a row.

Master Class Vol.11: Vladimir Kramnik

This DVD allows you to learn from the example of one of the best players in the history of chess and from the explanations of the authors (Pelletier, Marin, Müller and Reeh) how to successfully organise your games strategically, consequently how to keep y

Pavel Eljanov

The organizing association, Initiative pro Schach e.V., has not only put together an absolute top field but also invited outstanding players from previous tournament years to the 50th anniversary. This underlines the historical significance of the chess festival for the region and the chess world.

The tournament starts on Monday, 26 June, at the Goldsaal of the Dortmund Westfalenhallen. The players will face each opponent twice, with play running until Sunday, 2 July. Thursday is a rest day. The exact pairings will be published well in advance.

Spectators and participants of the Chess Festival will again have the chance to watch the stars up close in Dortmund. The A-Open will be played in the same room as the NC World Masters, the Goldsaal of the Dortmund Westfalenhallen.

Follow this link:
No-Castling Masters: Kramnik and Caruana will play in Dortmund - ChessBase

AI is teamwork Bits&Chips – Bits&Chips

Albert van Breemen is the CEO of VBTI.

15 March

As with any tool, it's knowing how to use it that makes a deep-learning algorithm useful, observes Albert van Breemen.

Last week, I visited a customer interested in learning more about artificial intelligence and its application in pick-and-place robots. After a quick personal introduction, I started to share some of my learnings from more than four years of applying deep learning to high-tech systems. Somewhat proudly, I explained that almost all deep-learning algorithms out there are available as open-source implementations. "This means," I said, "that anybody with some Python programming experience can download deep-learning models from the internet and start training." My customer promptly asked: "If everything is open and accessible to any artificial-intelligence company, how do these companies differentiate themselves?"

The question took me a bit off guard. After a short hesitation, I replied: "In the same way that a hammer and a spade are tools available to everybody, yet not everybody can make beautiful things with them. Data and algorithms are the tools of an AI engineer. Artificial-intelligence companies can set themselves apart with their experience and knowledge in applying these tools to solve engineering problems." While my answer kept the conversation going at the time, I needed to reflect on it later.

Having access to data and algorithms doesn't guarantee that you can make deep learning work. In my company, I introduced the Friday Afternoon Experiments, something I borrowed from Philips Research when I was working there back in 2001. Everybody in my company can spend Friday afternoons on a topic they're interested in and think might be relevant for the company. It encourages knowledge development, innovation and work satisfaction.

I started a Friday Afternoon Experiment myself, repeating a DeepMind project. In 2016, DeepMind created an algorithm called AlphaGo that was the first to defeat a professional human Go player. In a short time, the algorithm developed into the more generic AlphaZero algorithm, which was trained in one day to play Go, chess and shogi at world-champion level.

The devil of deep-learning technology is in the details

It took me over three months to get my AlphaZero implementation to work for the less complex games Connect 4 and Othello. I can now train a strong Connect 4 or Othello AlphaZero player in one day, but the project took far longer than I had hoped. It made me realize that the devil of deep-learning technology really is in the details. Deep-learning algorithms learn from data, but to set up the learning process and train successfully, you must define many so-called hyper-parameters. Small changes matter a lot, and a large part of your time can be spent finding good hyper-parameter settings. I'm lucky to have an experienced team to discuss problems and bottlenecks with.
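To give a flavour of those details, here is an illustrative set of AlphaZero-style hyper-parameters, with a small helper for sweeping a few of them. The parameter names are standard for this family of algorithms, but the values are placeholders, not DeepMind's (or the author's) actual settings:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class AlphaZeroConfig:
    """Illustrative AlphaZero-style knobs; values are placeholders."""
    num_simulations: int = 100    # MCTS rollouts per move during self-play
    c_puct: float = 1.5           # exploration constant in the PUCT formula
    learning_rate: float = 0.01   # step size for the network optimizer
    temperature: float = 1.0      # move-selection temperature in self-play
    dirichlet_alpha: float = 0.3  # concentration of root exploration noise

def sweep(**axes):
    """Yield one config per combination of the given hyper-parameter axes."""
    keys = list(axes)
    for combo in product(*(axes[k] for k in keys)):
        yield AlphaZeroConfig(**dict(zip(keys, combo)))

# Six candidate settings (3 x 2) to train and compare head-to-head
configs = list(sweep(c_puct=[1.0, 1.5, 2.0], learning_rate=[0.01, 0.001]))
```

Even this tiny two-axis sweep means six full training runs, which is why a modest change in hyper-parameters can dominate the wall-clock cost of a project like this.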

Besides data and algorithms, compute power was a key success factor for DeepMind. To stay with the metaphor of tools, some AI companies have power tools that differentiate them from others. Companies like OpenAI, DeepMind and Meta have huge amounts of compute power available for deep learning. The AI trinity of data, algorithms and compute power defines the complexity level of the problems they can solve. If all you have is a spade, you can dig a decent hole in a day; if you have an excavator, you can dig a swimming pool in the same timeframe. Huge compute power is something not all companies have access to, and this is where some AI companies can differentiate themselves. DeepMind trained AlphaGo using thousands of CPUs and hundreds of GPUs; during my experiment, I was limited to 64 CPU cores and one GPU.

If you're searching for a solution to a standard problem, you can go to almost any artificial-intelligence startup. However, if you have a problem that hasn't been solved before, you need more than just data, algorithms and compute power: an experienced and dedicated team makes the difference. This might seem obvious, but AI techno-babble can easily lead you to think otherwise. AI is teamwork!

Read more:
AI is teamwork Bits&Chips - Bits&Chips

Resolve Strategic nuclear subs poll (open thread) The Poll Bludger – The Poll Bludger

A detailed poll on the AUKUS nuclear submarines deal finds strong support among Labor and Coalition voters alike.

The Age/Herald published a Resolve Strategic poll on Saturday concerning AUKUS and nuclear submarines, which I held off posting about because I thought voting intention results might follow. That hasn't happened yet, so here goes.

As is perhaps unavoidable with the matter at hand, respondents were given fairly lengthy explanations of the relevant issues before having their opinions gauged, such that the results need to be considered carefully alongside what was actually asked. The first question outlined the proposed acquisition, pointing out both the expense and the expectation that it would create 20,000 jobs, and found 50% in favour and 17% opposed. Breakdowns by party support found near-identical results for Labor and Coalition supporters, with weaker support among an "others" category inclusive of both the Greens and minor parties of the right.

The second question asked respondents how they felt specifically about Australian submarines being nuclear-powered, finding 25% actively supportive, 39% considering the notion acceptable, and 17% actively opposed. The third put it to respondents that the federal government has hitherto been committed to spending 2% of GDP on defence, and that Anthony Albanese says he would like to spend more: 39% concurred, 31% said it should remain as is, and 9% felt it should be reduced. Finally, 46% felt large single-party states, like Russia and China, were a threat to Australia, but one that could be carefully managed; 36% felt they were a threat that needed to be confronted soon; and 8% felt they were no threat at all.

The poll was conducted from last Sunday to Thursday from a sample of 1600.


See the article here:
Resolve Strategic nuclear subs poll (open thread) The Poll Bludger - The Poll Bludger