Archive for the ‘Alphazero’ Category

Between Games and Apocalyptic Robots: Considering Near-Term Societal Risks of Reinforcement – Medium

With many of us stuck at home this past year, we've seen a surge in the popularity of video games. That trend hasn't been limited to humans. DeepMind and Google AI both released results from their Atari-playing AIs, which have taught themselves to play over fifty Atari games from scratch, with no provided rules or guidelines. The unique thing about these new results is how general the AI agent is. While previous efforts have achieved human performance on the games they were trained to play, DeepMind's new AI agent, MuZero, could teach itself to beat humans at Atari games it had never encountered in under a day. If this reminds you of AlphaZero, which taught itself to play Go and then chess well enough to outperform world champions, that's because it demonstrates an advance in the same suite of algorithms, a class of machine learning called Reinforcement Learning (RL).

While traditional machine learning parses out its model of the world (typically a small world pertaining only to the problem it's designed to solve) from swathes of data, RL is based on real-time observation. This means RL learns its model primarily through trial-and-error interactions with its environment, not by pulling out correlations from data representing a historical snapshot of it. In the RL framework, each interaction with the environment is an opportunity to build towards an overarching goal, referred to as a reward. An RL agent is trained to make a sequence of decisions about how to interact with its environment that will ultimately maximize its reward (i.e., help it win the game).
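As a rough sketch of that loop (the environment, agent, and function names below are invented for illustration and do not come from any real RL library), an agent repeatedly acts, observes a reward, and nudges its estimates toward whatever earned more reward:

    # A minimal sketch of the RL interaction loop described above, on a toy problem.
    # Everything here (ToyEnv, step!, choose_action, update!) is illustrative only.

    mutable struct ToyEnv
        state::Int
    end

    # Taking an action advances the environment and yields a reward signal.
    function step!(env::ToyEnv, action::Int)
        reward = (action == env.state % 3 + 1) ? 1.0 : 0.0   # reward only for the "right" action
        env.state += 1
        return env.state, reward
    end

    # A trivial agent: running value estimates per action, mostly picking the best one.
    struct Agent
        values::Vector{Float64}
    end

    choose_action(agent::Agent) = rand() < 0.1 ? rand(1:length(agent.values)) : argmax(agent.values)
    update!(agent::Agent, a::Int, r::Float64) = (agent.values[a] += 0.1 * (r - agent.values[a]))

    function run!(env::ToyEnv, agent::Agent, nsteps::Int)
        total = 0.0
        for _ in 1:nsteps
            a = choose_action(agent)       # act on the current state
            _, r = step!(env, a)           # observe the consequence
            update!(agent, a, r)           # learn from the reward
            total += r
        end
        return total
    end

    println("total reward: ", run!(ToyEnv(0), Agent(zeros(3)), 1000))

Systems like MuZero replace the lookup-table agent with deep networks and planning, but the act-observe-update cycle is the same.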

This unique iterative learning paradigm allows the AI model to change and adapt to its environment, making RL an attractive solution for open-ended, real-world problem-solving. It also makes it a leading candidate for artificial general intelligence (AGI) and has some researchers concerned about the rise of truly autonomous AI that does not align with human values. Nick Bostrom first posed what is now the canonical example of this risk among AI safety researchers: a paperclip robot with one goal, to optimize the production efficiency of paperclips. With no other specifications, the agent quickly drifts from optimizing its own paperclip factory to commandeering food production supply chains for the paperclip-making cause. It proceeds to place paperclips above all other human needs until all that's left of the world is a barren wasteland covered end to end with unused paperclips. The takeaway? Extremely literal problem solving combined with an inaccurate problem definition can lead to bad outcomes.

This rogue AGI (albeit in more high-stakes incarnations like weapons management) is the type of harm usually thought of when trying to make RL safe in the context of society. However, between an autonomous agent teaching itself games in the virtual world and an intelligent but misguided AI putting humanity at existential risk lies a multitude of sociotechnical concerns. As RL is rolled out in domains ranging from social media to medicine and education, it's time we seriously think about these near-term risks.

How the paperclip problem will play out in the near term is likely to be rather subtle. For example, medical treatment protocols are currently popular candidates for RL modeling; they involve a series of decisions (which treatment options to try) with uncertain outcomes (different options work better for different people) that all connect to the eventual outcome (patient health). One such study tried to identify the best treatment decisions to avoid sepsis in ICU patients based on multitudes of data, including medical histories, clinical charts, and doctors' notes. Their first iteration was an astounding success: with very high accuracy, it identified treatment paths that resulted in patient death. However, upon further examination and consultation with clinicians, it turned out that though the agent had been allowed to learn from a plethora of potentially relevant treatment considerations, it had latched onto only one main indicator of death: whether or not a chaplain was called. The goal of the system was to flag treatment paths that led to deaths, and in a very literal sense that's what it did. Clinicians only called a chaplain when a patient presented as close to death.

You'll notice that in this example, the incredibly literal yet unhelpful solution the RL agent had hit upon was discovered by the researchers. This is no accident. The field of modern medicine is built around the reality that connections between treatments and outcomes typically have no known causal explanations. Aspirin, for example, was used as an anti-inflammatory for over seventy years before we had any insight into why it worked. This lack of causal understanding is sometimes referred to as intellectual debt; if we can't describe why something works, we may not be able to predict when or how it will fail. Medicine has grown around this fundamental uncertainty. Through strict codes of ethics, industry standards, and regulatory infrastructure (i.e., clinical trials), the field has developed the scaffolding to minimize the accompanying harms. RL systems aiming to help with diagnosis and treatment have to develop within this infrastructure. Compliance with the machinery medicine has built around intellectual debt is more likely to result in slow and steady progress, without colossal misalignment. This same level of oversight does not apply to fields like social media, whose potential harms are hard to pin down and which have virtually no regulatory scaffolding in place.

We may have already experienced some of the early harms of RL-based algorithms in complex domains. In 2018, YouTube engineers released a paper describing an RL addition to their recommendation algorithm that increased daily watch time by 6 million hours in the beta testing phase. Meanwhile, anecdotal accounts of radicalization through YouTube rabbit holes of increasingly conspiratorial content (e.g., the New York Times' reporting on YouTube's role in empowering Brazil's far right) were on the rise. While it is impossible to know exactly which algorithms powered the platform's recommendations at the time, this rabbit-hole effect would be a natural result of an RL algorithm trying to maximize view time by nudging users towards increasingly addictive content.

In the near future, dynamic manipulation of this sort may end up at odds with established protections under the law. For example, Facebook has recently come under scrutiny from the Department of Housing and Urban Development for discriminatory housing advertisements. The HUD suit alleges that even without explicit targeting filters that amount to the exclusion of protected groups, its algorithms are likely to hide ads from users the system determines are unlikely to engage with the ad, even if the advertiser explicitly wants to reach those users. Given the types of (non-RL) ML algorithms Facebook currently uses in advertising, proving this disparate impact would be a matter of examining the data and features used to train the algorithm. While the current lack of transparency makes this challenging, it is fundamentally possible to roll out benchmarks capable of flagging such discrimination.

If advertising were instead powered by RL, benchmarks would not be enough. An RL advertising algorithm tasked with ensuring it does not discriminate against protected classes could easily end up merely making it look as though it were not discriminating. If the RL agent were optimized for profit and the practice of discrimination were profitable, the agent would be incentivized to find loopholes through which it could circumvent protections. Just as in the sepsis treatment case, the system is likely to find a shortcut towards reaching its objective, only in this case the lack of regulatory scaffolding makes it unlikely this failure will be picked up. The propensity of RL to adapt to meet metrics while skirting over intent will make it challenging to tag such undesirable behavior. This situation is further complicated by our heavy reliance on data as a means to flag potential bias in ML systems.

Unlike RL, traditional machine learning is innately static; it takes in loads of data, parses it for correlations, and outputs a model. Once a system has been trained, updating it to accommodate a new environment or changes to the status quo requires repeating most or all of that initial training with updated data. Even for firms that have the computing power to make such retraining seamless, the reliance on data has provided an opening for transparency. As the saying goes, machine learning is like money laundering for bias. If an ML system is trained using biased or unrepresentative data, its model of the world will reflect that. In traditional machine learning, we can at least follow the marked bills and point out when an ML system is going to be prone to discrimination by examining its training data. We may even be able to preprocess the data before training the system in an attempt to preemptively correct for bias.
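To make "follow the marked bills" concrete, a first pass at examining training data can be as simple as comparing label rates across groups. The sketch below uses invented records and field names, not any real dataset:

    # A toy audit of training data before any model is fit: compare positive-label
    # rates across a (hypothetical) protected attribute.

    records = [
        (group = "A", label = 1), (group = "A", label = 0), (group = "A", label = 1),
        (group = "B", label = 0), (group = "B", label = 0), (group = "B", label = 1),
    ]

    function positive_rate(recs, g)
        subset = filter(r -> r.group == g, recs)
        return count(r -> r.label == 1, subset) / length(subset)
    end

    for g in unique([r.group for r in records])
        println("group ", g, ": positive-label rate = ", round(positive_rate(records, g); digits = 2))
    end

Large gaps between groups do not prove discrimination on their own, but they are exactly the kind of signal that this kind of data inspection can surface before training.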

Since RL is generally based on real-time observation rather than on a fixed training dataset, this follow-the-data approach to algorithmic oversight does not apply. There is no controlled input data to help us anticipate or correct for where an RL system can go wrong before we set it loose in the world.

In certain domains, this lack of data-borne insight may not be too problematic. The more fully we can specify the moving parts of a given application and the ways in which it may fail, be it through an understanding of the domain or through regulatory scaffolding, the safer it is for us to use RL. DeepMind's use of RL to lower the energy costs of its computing centers, a process ultimately governed by the laws of physics, deserves less scrutiny than the RL-based K-12 curriculum generator that Google's Ed Chi views as a near-term goal of the field. The harder it is to describe what success looks like within a given domain, the more prone to bad outcomes it is. This is true of all ML systems, but it is even more crucial for RL systems, which cannot be meaningfully validated ahead of use. As regulators, we need to think about which domains need more regulatory scaffolding to minimize the fallout from our intellectual debt, while allowing for the immense promise of algorithms that can learn from their mistakes.

Follow this link:
Between Games and Apocalyptic Robots: Considering Near-Term Societal Risks of Reinforcement - Medium

Trapping the queen – Chessbase News

Today's programs are all so strong that they differ more in the details than in any decisive statement of strength, and there is no question that arguing over differences at that stratospheric level seems almost ludicrous: Engine A is 3568 Elo, while Engine B is inferior because it is only 3565 Elo. So say the humans, all hovering under 2800 barring a small fistful.

Still, the game made such a powerful impression on Peter Grayson that he declared,

"Considering the fast time control that was quite amazing by Fat Fritz 2 and its subtlety was of a sophistication I would associate more with the human mind than an engine particularly for the follow up that confirmed the engine can execute a long term strategy.Perhaps the Fritz network does provide a more human rather than mechanical, logical approach?"

The game starts quietly, almost innocuously. An English line that has seen proponents on both sides at the highest echelons.

Yet by move 12 they had both left most known territory behind, with only an Italian correspondence game cited in Mega 2021. The key move that incited so much enthusiasm, and which got Black into such a dangerous situation, came here:

"On the face of it this looks to be in line with the idea of controlling an open file where the controlling side tends to have an advantage.The following moves question that idea when it is a wing file and also whether it is advisable for the queen to lead on the rank that will likely be the first piece to come under attack. White's reply may not be immediately obvious until it is seen and few other engines find it, certainly not within the context of the game."

"How important this move was to the outcome of the game should not be understated. With Black seeking to gain control of the open a-file, suddenly the queen looks cut off and potentially a liability. That is the theme of the ensuing moves. Perhaps it deserves !!! What is fascinating is that Fat Fritz 2 exhibits almost human-like qualities to threaten the snaring of the queen."

While Black does manage to avoid the outright loss of the queen, it comes at a heavy price that ultimately costs the game.

White now threatens to win the queen in two moves with 30. b4 a4 31. a4. Black avoids this fate by giving up the exchange, but this in itself proves fatal.

Fat Fritz 2

Fat Fritz 2.0 is the successor to the revolutionary Fat Fritz, which was based on the famous AlphaZero algorithms. This new version takes chess analysis to the next level and is a must for players of all skill levels.

Here is the full game with the generous comments by Peter Grayson.

See more here:
Trapping the queen - Chessbase News

AI 101: All the Ways AI Could Improve or End Our World – Interesting Engineering

"We are as gods and might as well get good at it." (Stewart Brand, 1968)

In December 2017, AlphaZero, a chess-playing artificial intelligence (AI) developed by Google's DeepMind, defeated Stockfish 8, the reigning world champion program at that time. AlphaZero calculates around 80,000 moves per second, according to The Guardian. Stockfish? 70 million.

Yet, out of 100 matches, AlphaZero won 28 and tied 72.

Stockfish's open-source algorithm has been continually tweaked by human input over the years. The New Yorker reports that coders suggest an idea to update the algorithm, and the two versions are then pitted against each other for thousands of matches to see which comes out on top.

Google claims that AlphaZero's machine learning algorithm had no human input beyond the programming of the basic rules of chess. This is a type of deep learning, wherein programs carry out complex tasks without human intervention or oversight. After being taught the basics of the game, AlphaZero was then set free to teach itself how to get better.

So, how quickly was the AI able to develop its algorithm well enough to beat one of the most advanced chess programs in the world?

Four hours.

It wasn't just the speed with which it machine-learned its way to chess mastery that amazed people, either. It was AlphaZero's, for lack of a better word, creativity. Writing in The Atlantic, historian Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, notes that some of AlphaZero's strategies could even be described as genius.

Everything about AlphaZero is indicative of how fast and how acute the AI revolution is likely to be. Programs like this will essentially be doing the same kind of information processing our brains do, except better, far better, with a breadth and depth that no biological system (including the human brain) could ever hope to compete with.

Debates about consciousness and free-will aside, these programs will undoubtedly possess intelligence by at least some definition of the word. Unconstrained by biology and with a human-like ability to learn and course-correct, the potential for change is so large that it may be impossible to comprehend, let alone predict.

Yet, we do have some ideas about where we may end up.

But to truly understand what that final finish line may look like, we first need to understand what artificial intelligence really is.

What is artificial intelligence? There is no single, universally-accepted definition of AI, meaning it can be easy to get lost in the philosophical and technical woods while trying to outline it. There are, however, a few key points that researchers agree are relevant to any definition.

The Stanford Encyclopedia of Philosophy notes that many scientists and philosophers have attempted to define AI through the concept of rationality, expressed in either machine thinking or behavior. A 2019 report released by the European Commission describes the basics of how AI programming achieves that rationality through perceiving the environment, interpreting the information found within it, and then deciding on the best course of action for a particular goal, potentially altering the environment in the process.

Experts at IBM and the Massachusetts Institute of Technology (MIT) founded the MIT-IBM Watson AI Lab in 2017 and offer useful perspectives on how to think of the technology. The lab's name may be familiar to you; Watson was the program that beat out two human competitors to win on the game show Jeopardy back in 2011. The lab defines AI as enabling computers and machines to mimic the perception, learning, problem-solving, and decision-making capabilities of the human mind. This is an umbrella definition that does an excellent job of encapsulating the basic idea.

Importantly, the lab then distinguishes between three AI categories. "Narrow AI" is composed of algorithms that perform specific tasks at daunting speed. Narrow AI encompasses much of the AI technology in existence today: voice assistance technology, translation services, and those chess programs mentioned above are all examples of this type of AI.

The Watson AI Lab aims to take AI two critical steps further. First to "broad AI," meaning systems that are able to learn with greater flexibility. And eventually to "artificial general intelligence": systems capable of complex reasoning and full autonomy.

This last category would be something akin to the archetypal sci-fi version of autonomous machines.

For now, most AI technology remains in the narrow classification. By looking at the current progress of that narrow AI and its benefits and risks, we can see hints of what the future might bring.

Pulling back from the slightly esoteric nature of programs like AlphaZero, Stockfish, and Watson, we see how AI's reach currently extends into the life of the average person. Millions of people use programs like Siri and Alexa every day. Chatbots help consumers troubleshoot their problems, and foreign language students and travelers around the world rely on online translation services. When you do a simple Google search, human-tweaked algorithms carefully arrange what you see and what you don't.

A trip to the hospital or clinic could put you in close contact with AI, too. In 2019, Harvard University reported that, while the majority of current medical AI applications deal in simple numerical or image-based data, such as analyzing blood pressure or MRIs, the technology is advancing to impact health in much larger ways. For example, researchers from Seoul National University Hospital and College of Medicine have developed an algorithm capable of detecting abnormalities in cell growth, including cancers. When stacked against the performance of 18 actual doctors, the algorithm outperformed 17 of them.

Meteorology is getting a boost from AI as well. A collaboration between Microsoft and the University of Washington has resulted in a weather prediction model that uses nearly 7,000 times less computing power than traditional models to generate forecasts. While those forecasts were less accurate than the most advanced models currently in use, this work represents an important step forward in cutting down the time and energy it takes to create weather and climate models, which could someday save lives.

The farming industry is another area that would benefit greatly from the development of such weather-predicting AI. And those working in agriculture are just as busy incorporating the technology into much of what they do.

Forbes reports that, by 2025, investment in smart technology for agricultural use will reach over $15 billion. This AI is starting to transform the field, bettering crop yields and lowering production costs. Coupled with drones and field-based sensors, AI is helping to generate completely new information pools the sector has never had access to before, allowing farmers to better analyze fertilizer effectiveness, improve pest management, and monitor the health of livestock.

Machine learning is even being used to create systems that mimic human characteristics like humor. In 2019, Wired reported on researchers who designed an AI capable of creating puns. It may be possible in the near future to shoot the breeze with a linguistically sharp Siri or Alexa, trading wordplay as you go. You know what they say about an eye for an AI.

All of this is exciting. Despite the game-changing levels of hope and optimism that AI is ushering in for humanity's future, however, there are unavoidable conversations regarding the dangers it could pose as well.

The risks associated with using AI are many. It's important to understand that, however bright AI could potentially make the future, it could also be used to bring about practices that would be perfectly at home in an Orwellian or Huxleyan context.

In nearly every field in which AI is being applied, important ethical questions are being raised. Critically, whatever problems AI exhibits in the future are likely to be reflections and extensions of the humans behind it.

Unfortunately, a look at those simple, everyday Google searches shows how human input can drive machine learning for better or for worse. According to reporting by The Wall Street Journal in 2019, Google's algorithms are subject to regular tinkering from executives and engineers who are trying to deliver relevant search results while also pleasing a wide variety of powerful interests and driving its parent company's more than $30 billion in annual profit. This raises concerns about who influences what billions of search engine users see on a daily basis, and how that influence might change according to undisclosed agendas.

It could lead us to a kind of propaganda and social engineering that is terrifyingly effective.

And while it's true that AI is revolutionizing the medical world in life-saving ways, the benefits it brings are, likewise, subject to serious pitfalls.

Writing in the journal AI & Society in 2020, Maximilian Kiener warns that machine learning is vulnerable to cyber attacks, data mismatching, and the biases of its programmers, at the very least. Kiener references a study in which, according to scans made by an AI algorithm, black women being tested for breast cancer exhibited a lower risk of potential mutations compared to white women, despite having a similar risk in reality.

Errors like this could potentially be fatal, and they could result in specific groups and classes of people who are unable to reap the benefits of modern medicine.

As AI integrates more and more with medical technology, the disclosure of such risks to patients is imperative.

Similarly, self-driving cars are not exempt from a host of sobering technical and ethical challenges. In 2018, a self-driving Uber car hit an Arizona pedestrian, who later died at the hospital from her injuries. As NBC News reports, there was no malfunction in the car's AI programming: it had been trained to recognize pedestrians only at crosswalks, not when jaywalking.

It seems like a minor oversight, but once they are fully integrated into our infrastructure, AI systems that are similarly "blind" could cause a catastrophic loss of life.

AI has also found its way into the wars of the world. Militaries engaged in this generation's arms race are trying to perfect the technology in automated weapons systems. While this could certainly bring about a lot of good in terms of reducing loss of life, the question of how comfortable humanity is with machine learning deciding who lives and who dies in certain situations is one that we are already facing right now.

And in other arenas, some governments and private security organizations have already utilized facial recognition software to frightening effect. China's use of technology to profile Uyghur people within its borders has been raising moral eyebrows for some time, for example.

As the journal Nature reported at the end of 2020, some researchers are beginning to push back against those in the academic community who have published papers on how to build facial recognition algorithms.

Amazon, one of the largest providers of facial recognition AI to both the US government and private security organizations in China, has faced much scrutiny about the technology's relationship with civil rights abuses. In June 2020, the MIT Technology Review reported that, facing public backlash as well as pressure from the American Civil Liberties Union, the company decided to suspend sales of facial recognition technology for a one-year period, following similar declarations by both IBM and Microsoft. According to the BBC, Amazon is waiting on Congress to implement new rules regarding the technology's use.

For now? There is little legislation governing where and how the technology is deployed, whether it be for catching suspected criminals or undocumented immigrants, or for monitoring where you shop, what you buy, and who you go to dinner with.

How we design and use AI also has a tangible effect on human psychology and social cohesion, particularly with respect to the kinds of information we are shown online.

The Wall Street Journal reported back in 2018 on how YouTube's algorithms recommend high-traffic videos that are more likely to keep people on the site and watching. Whether by design or not, this frequently leads viewers to consume increasingly extreme content, even when those users haven't shown interest in such content. Since it has been established that the internet is particularly prone to fostering the development of conspiracy theories, worries about how these algorithms play a part in society's troubles and radicalization may be well justified.

The social ramifications could go far deeper. Dirk Helbing is a professor of computational social science at ETH Zurich whose research specializes in applying computer modeling and simulations to the phenomena of social coordination, conflict, and collective opinion formation. In the book Towards Digital Enlightenment: Essays on the Dark and Light Sides of the Digital Revolution, he, Bruce Frey of the University of Kansas, and seven other researchers write lucidly on how the relationship of coder and coded is becoming a two-way street.

Some software platforms are moving towards persuasive computing. In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people.

Yuval Noah Harari suggests similarly disconcerting scenarios. While he warns that dystopian visions of malevolent leaders monitoring citizens' biometrics and psyches with AI are a distinct possibility, it might not be the one we should be most worried about:

"We are unlikely to face a rebellion of sentient machines in the coming decades," he writes in The Atlantic, "but we might have to deal with hordes of bots that know how to press our emotional buttons better than our mother does and that use this uncanny ability, at the behest of a human elite, to try to sell us something, be it a car, a politician, or an entire ideology."

Ultimately, it is impossible to list all the benefits and risks of artificial intelligence, as the technology already impacts nearly every aspect of our lives: from what we watch, to what we buy, to how we think, and what we know. Likewise, it is impossible to know exactly where we will end up. However, the multitude of options here highlights just how important this issue is, and it makes one thing abundantly clear: the decisions we make today dictate where we end up tomorrow, which is why it is so very important to go slow and not "move fast and break things."

The realization of general AI, something that seems to be an inevitability at this point, could end up being humanity's greatest ever technological achievement, or our demise. While delivering a TED Talk in Alberta, Canada, in 2016, neuroscientist and AI commentator Sam Harris emphasized just how important it is to get the initial conditions of that achievement right:

When you're talking about superintelligent AI that can make changes to itself, it seems to me that we only have one chance to get [it] right. The moment we admit that information processing is the source of intelligence, [...] and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we're in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

Continue reading here:
AI 101: All the Ways AI Could Improve or End Our World - Interesting Engineering

Quick Scripts AlphaZero

The AlphaZero.Scripts module provides a quick way to execute common tasks with a single line of code. For example, starting or resuming a training session for the connect-four example becomes as simple as executing the following command line:
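Assuming the package's standard Scripts.train entry point (the exact invocation may differ between AlphaZero.jl versions), that command looks like:

    julia --project -e 'using AlphaZero; Scripts.train("connect-four")'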

The first argument of every script specifies what experiment to load. This can be specified as an object of type Experiment or as a string from keys(Examples.experiment).

Perform some sanity checks regarding the compliance of a game with the AlphaZero.jl Game Interface.

Launch a training session where hyperparameters are altered so that training finishes as quickly as possible.

This is useful to ensure the absence of runtime errors before a real training session is started.

Start or resume a training session.

The optional keyword arguments are passed directly to the Session constructor.

Play an interactive game against the current agent.

Use the interactive explorer to visualize the current agent.
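In code, the scripts described above correspond roughly to the following calls (a sketch; exact signatures and keyword arguments may vary across AlphaZero.jl versions):

    using AlphaZero

    Scripts.test_game("connect-four")   # sanity-check the game against the Game Interface
    Scripts.dummy_run("connect-four")   # abbreviated training run to surface runtime errors early
    Scripts.train("connect-four")       # start or resume a full training session
    Scripts.play("connect-four")        # play an interactive game against the current agent
    Scripts.explore("connect-four")     # open the interactive explorer on the current agent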

Read more here:
Quick Scripts AlphaZero

How to Kickstart an AI Venture Without Proprietary Data – Medium

AI startups have a chicken-and-egg problem. Here's how to solve it.

A few years ago, I learned about the billions of dollars banks lose to credit card fraud on an annual basis. Better detection or prediction of fraud would be incredibly valuable. And so I considered the possibility of convincing a bank to share their transactional data in the hope of building a better fraud detection algorithm. The catch, unsurprisingly, was that no major bank is willing to share such data. They feel they're better off hiring a team of data scientists to work on the problem internally. My startup idea died a quick death.

Despite the tremendous innovation and entrepreneurial opportunities around AI, breaking into AI can be a daunting task for entrepreneurs as they face a chicken-and-egg problem before they even begin, something existing companies are less likely to contend with. I believe specific strategies can help entrepreneurs overcome this challenge and create successful AI-driven ventures.

Today's AI systems need to be trained on large datasets, which can pose a challenge for entrepreneurs. Established companies with a sizable customer base already have a stream of data from which they can train AI systems, build new products and enhance existing ones, generate additional data, and rinse and repeat (for example, Google Maps has over 1B monthly active users and over 20 petabytes of data). But for entrepreneurs, the need for data poses a chicken-and-egg problem: because their company hasn't yet been built, they don't have data, which means they can't create an AI product as easily.

Additionally, data is not only necessary to get started with AI; it is actually key to AI performance. Research has shown that while algorithms matter, data matters more. Among modern machine learning methods, the differences in performance between various algorithms are relatively small when compared to the performance differences between the same algorithms trained with more or less data (Banko and Brill, 2001).

There are several strategies that can help entrepreneurs navigate this chicken-and-egg problem and access the data they need to break into the AI space.

1. Start by offering a service that generates data

While data does need to come before an AI product, data does not need to come before all products. Entrepreneurs can begin by creating a service that is not AI-based, but that solves customer problems and generates data in the process. This data can later be used to train an AI system that enhances the existing service or creates a related one.

For example, Facebook didn't use AI in its early days, but it still provided a social networking platform that customers wanted to join. In the process, Facebook generated a large amount of data, which was in turn used to train AI systems that helped personalize the news feed and also made it possible to run extremely targeted ads. Despite not being an AI-driven service at the outset, Facebook has become a heavy user of AI.

Similarly, the InsurTech startup Lemonade didn't have the data to build sophisticated AI capabilities on day one. Over time, however, Lemonade has built AI tools to create quotes, process claims, and detect fraud. Today, its AI system handles the first notice of loss for 96% of claims, and manages the full claim resolution without any human involvement in a third of cases. These AI capabilities were built using the data generated over many years of operations.

2. Partner with a non-tech company that has a proprietary dataset

Entrepreneurs can partner with a company or organization that has a proprietary dataset but lacks in-house AI expertise. This approach is particularly useful in contexts where it would be very difficult to create a product that in turn generates the kind of data your AI application needs, such as medical data about patient tests and diagnoses. In this case, you could partner with a hospital or insurance company in order to obtain anonymized data.

A related point is that training data for your AI product can come from a potential customer. While this is harder in regulated industries like healthcare and finance, customers in other industries like manufacturing may be more open to it. All you might need to offer in return is exclusive access to the AI product for a few months or early access to future product features.

A pitfall of this approach is that potential partners may prefer working with established companies rather than smaller players who may be less known and trusted (especially in a post-GDPR, post-Cambridge Analytica world). So business development will be tricky, but this strategy is nonetheless feasible, especially when well-known tech companies are not already chasing after your desired partner.

Entrepreneurs who are part of a family business may already have access to a potentially large amount of data from their existing business. That's a great option too.

3. Crowdsource the (labeled) data you need

Depending on the kind of data needed, entrepreneurs can obtain data through crowdsourcing. When data is available but is not well labeled (e.g. images on the Internet), crowdsourcing can be a particularly well-suited method for obtaining this data, as labeling is a task that lends itself well to being completed quickly by a large number of individuals on crowdsourcing platforms. Platforms such as Amazon Mechanical Turk and Scale.ai are frequently used to help generate labeled training data.

For example, consider Google's use of CAPTCHAs. While they serve an important security purpose, Google simultaneously uses them as a crowdsourced image-labeling system. Every day, millions of users are effectively part of Google's data pre-processing team, validating machine learning algorithms for free.

Some products have workflows that allow customers to help label new data in the course of using the product. In fact, the entire subfield of active learning is focused on how to interactively query users to better label new data points. For example, consider a cybersecurity product that generates alerts about risks, and a workflow in which an ops engineer resolves those alerts, thereby generating new labeled data. Similarly, product recommendation services like Pandora use upvotes and downvotes to validate recommendation accuracy. In both of these cases, you can start with an MVP that continually improves over time as customers provide feedback.
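As a rough sketch of that loop (nothing here is tied to a particular product; the score function below stands in for whatever model you already have), uncertainty sampling can be as simple as routing the examples the model is least sure about to a human:

    # Uncertainty sampling: send the model's least confident predictions to humans
    # for labeling, then fold the newly labeled examples into the next training run.

    score(x) = 1 / (1 + exp(-0.8 * x))       # stand-in for an existing model's probability output

    unlabeled = randn(100)                    # hypothetical pool of unlabeled examples

    uncertainty(x) = abs(score(x) - 0.5)      # probabilities near 0.5 = least confident
    to_review = sort(unlabeled; by = uncertainty)[1:10]

    # `to_review` is what you would route to annotators, or surface in the product
    # workflow (like the alert-resolution example above), to generate new labels.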

4. Make use of public data

Before you conclude that the data you need is not available, look harder. There is more publicly available data than you might imagine. There are even data marketplaces emerging. While publicly available data (and therefore the resulting product) might be less defensible, you can build defensibility through other service/product innovations such as creating an exceptional user experience or combining offline and digital data at scale as Zillow does (the company uses offline public municipal data at scale as part of their innovative online real estate application). One could also combine publicly available data with some proprietary data, which could be generated over time or obtained through partnerships, crowdsourcing, etc.

The Canadian company BlueDot uses a variety of data sources, including publicly available data, in order to detect outbreaks of emerging diseases before they are officially reported as well as predict where an outbreak will spread to next. BlueDot uses statements from official public health organizations, digital media, global airline ticketing data, livestock health reports, and population demographics, among other data sources. The company detected the COVID-19 outbreak on December 30th, 2019, nine days before the WHO reported on it.

5. Rethink the need for data

It is true that most of the practical AI in the business world is based on machine learning, and most of that ML is supervised (which requires large labeled training datasets). But many problems can be solved with other AI techniques that are far less reliant on labeled data, such as reinforcement learning or expert systems.

Reinforcement learning is an ML approach in which algorithms learn by testing various actions or strategies and observing the rewards from these actions. Essentially, reinforcement learning uses experimentation to compensate for a lack of labeled training data. The original iteration of Google's Go-playing software, AlphaGo, was bootstrapped from a large dataset of human expert games, but its successors, AlphaGo Zero and then AlphaZero, were based on self-play reinforcement learning and used no human game data at all. Yet these self-taught versions went on to beat the original AlphaGo (which had itself beaten Lee Sedol, one of the world's strongest Go players).

Reinforcement learning is widely used in online personalization. Online companies frequently test and evaluate multiple website designs, product descriptions, product images, and price points. Reinforcement learning algorithms explore new design and marketing choices and rapidly learn how to personalize the user experience based on how users respond.
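A stripped-down version of this idea is a multi-armed bandit over design variants. The sketch below is illustrative only; the click-through rates are made up and used purely to simulate user responses:

    # Epsilon-greedy bandit over hypothetical page designs: usually show the design with
    # the best observed click rate, but keep exploring the alternatives occasionally.

    function simulate!(values, counts, true_ctr; rounds = 100_000, ε = 0.1)
        for _ in 1:rounds
            arm = rand() < ε ? rand(1:length(values)) : argmax(values)
            reward = rand() < true_ctr[arm] ? 1.0 : 0.0          # simulated click
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
        end
        return values
    end

    true_ctr = [0.04, 0.06, 0.05]                # unknown in practice; only used to simulate users
    estimates = simulate!(zeros(3), zeros(Int, 3), true_ctr)
    println("estimated click rates: ", round.(estimates; digits = 3))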

Another approach is to use expert systems: rule-based systems that codify the rules experts apply routinely. While expert systems rarely beat well-trained ML systems at complex tasks such as medical diagnosis or image recognition, they can help break the chicken-and-egg problem and get you started. For example, the virtual healthcare company Curai used knowledge from expert systems to create clinical vignettes, and then used these vignettes as training data for ML models (alongside data from electronic health records and other sources).
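A toy version of the expert-systems approach, with entirely invented rules that are not clinical advice, might look like this:

    # A tiny rule-based "expert system": hand-written if-then rules applied in order
    # until one fires. No training data is required.

    rules = [
        (p -> p.temperature >= 39.5 && p.heart_rate > 120, "urgent referral"),
        (p -> p.temperature >= 38.0,                        "schedule a follow-up"),
        (p -> true,                                         "routine advice"),
    ]

    function recommend(patient)
        for (condition, action) in rules
            condition(patient) && return action
        end
    end

    println(recommend((temperature = 38.6, heart_rate = 90)))   # prints "schedule a follow-up"

The rules are transparent and easy to revise with domain experts, and the decisions they produce can later serve as seed training data for an ML model, as in the Curai example above.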

To be clear, not every intelligence problem can be cast as a reinforcement learning problem or tackled through an expert systems approach. But these are worth considering when the lack of training data has halted the development of an interesting ML product.

Entrepreneurs are most likely to develop a consistent stream of proprietary data if they start by offering a service that has value without AI and that generates data, and then use this to train an AI system. However, this strategy does require time and may not be the best fit for all situations. Depending on the nature of the startup and the kind of data that is needed, it may work better to partner with a non-tech company that has a proprietary dataset, crowdsource (labeled) data, or make use of public data. Alternatively, entrepreneurs can rethink the need for data entirely and consider taking a reinforcement learning or expert systems approach.

Read this article:
How to Kickstart an AI Venture Without Proprietary Data - Medium