The rocket ship trajectory of a startup is well known: Get an idea, build a team and slap together a minimum viable product (MVP) that you can get in front of users.
However, today's startups need to reconsider the MVP model as artificial intelligence (AI) and machine learning (ML) become ubiquitous in tech products and the market grows increasingly conscious of the ethical implications of AI augmenting or replacing humans in the decision-making process.
An MVP allows you to collect critical feedback from your target market that then informs the minimum development required to launch a product, creating a powerful feedback loop that drives today's customer-led businesses. This lean, agile model has been extremely successful over the past two decades, launching thousands of successful startups, some of which have grown into billion-dollar companies.
However, building high-performing products and solutions that work for the majority isn't enough anymore. From facial recognition technology that is biased against people of color to credit-lending algorithms that discriminate against women, the past several years have seen multiple AI- or ML-powered products killed off because of ethical dilemmas that crop up downstream, after millions of dollars have been funneled into their development and marketing. In a world where you have one chance to bring an idea to market, this risk can be fatal, even for well-established companies.
Startups do not have to scrap the lean business model in favor of a more risk-averse alternative. There is a middle ground that can introduce ethics into the startup mentality without sacrificing the agility of the lean model, and it starts with the initial goal of any startup: getting an early-stage proof of concept in front of potential customers.
However, instead of developing an MVP, companies should develop and roll out an ethically viable product (EVP) based on responsible artificial intelligence (RAI), an approach that accounts for ethical, moral, legal, cultural, sustainability and socio-economic considerations during the development, deployment and use of AI/ML systems.
And while this is a good practice for startups, it's also a good standard practice for big technology companies building AI/ML products.
Here are three steps that startups, especially those that incorporate significant AI/ML techniques in their products, can use to develop an EVP.
Startups have chief strategy officers, chief investment officers, even chief fun officers. A chief ethics officer is just as important, if not more so. This person can work across different stakeholders to make sure the startup is developing a product that fits within the moral standards set by the company, the market and the public.
They should act as a liaison between the founders, the C-suite, investors and the board of directors on one side and the development team on the other, making sure everyone is asking the right ethical questions in a thoughtful, risk-averse manner.
Machines are trained on historical data. If systemic bias exists in a current business process (such as unequal racial or gender lending practices), AI will pick up on that and think that's how it should continue to behave. If your product is later found not to meet the ethical standards of the market, you can't simply delete the data and find new data.
These algorithms have already been trained. You can't erase that influence any more than a 40-year-old man can undo the influence his parents or older siblings had on his upbringing. For better or for worse, you are stuck with the results. Chief ethics officers need to sniff out that inherent bias throughout the organization before it gets ingrained in AI-powered products.
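One practical way to sniff out that bias is to audit the historical data before any model is trained. Below is a minimal sketch, assuming a hypothetical lending dataset with gender and approved columns (the file and column names are illustrative placeholders); a large gap in past approval rates is exactly the kind of pattern a model trained on this data would learn and reproduce.

```python
import pandas as pd

# Hypothetical historical lending data; the file and column names
# ("gender", "approved") are illustrative placeholders.
df = pd.read_csv("historical_loans.csv")

# Approval rate per group: systemic bias in past human decisions
# shows up here before a model can be trained on it.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Disparate impact ratio; the "four-fifths rule" from US employment
# guidance treats ratios below 0.8 as a red flag worth investigating.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Disparate impact ratio {ratio:.2f} is below 0.8; review before training.")
```

A check this simple won't catch every form of bias, but it surfaces the most obvious disparities while there is still time to fix the data rather than the deployed product.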
Responsible AI is not just a point in time. It is an end-to-end governance framework focused on the risks and controls of an organization's AI journey. This means ethics should be integrated throughout the development process, starting with strategy and planning and continuing through development, deployment and operations.
During scoping, the development team should work with the chief ethics officer to understand general ethical AI principles, behavioral norms that hold across many cultural and geographic contexts. These principles prescribe, suggest or inspire how AI solutions should behave when faced with moral decisions or dilemmas in a specific field of usage.
Above all, a risk and harm assessment should be conducted to identify any risk to anyone's physical, emotional or financial well-being. The assessment should also look at sustainability and evaluate what harm the AI solution might do to the environment.
During the development phase, the team should be constantly asking how their use of AI aligns with the company's values, whether models are treating different people fairly and whether they are respecting people's right to privacy. They should also consider whether their AI technology is safe, secure and robust, and how effective the operating model is at ensuring accountability and quality.
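To make the fairness question concrete, a team might add a simple check like the one below to its validation suite. This is only a sketch: the labels, predictions and group attribute are made-up inputs, and the metric shown (the gap in true positive rates across groups) is just one of several common fairness measures.

```python
import numpy as np

def true_positive_rate_gap(y_true, y_pred, group):
    """Largest gap in true positive rate between any two groups.

    A large gap means qualified members of one group are approved
    less often than qualified members of another.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)  # positives in this group
        if mask.any():
            tprs[g] = y_pred[mask].mean()
    return max(tprs.values()) - min(tprs.values()), tprs

# Hypothetical validation labels, model outputs and a protected attribute.
gap, per_group = true_positive_rate_gap(
    y_true=[1, 1, 0, 1, 1, 0, 1, 1],
    y_pred=[1, 0, 0, 1, 1, 0, 1, 0],
    group=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(per_group, gap)
```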
A critical component of any machine learning model is the data used to train it. Startups should be concerned not only with the MVP and how the model is validated initially, but also with the eventual context and geographic reach of the model. This will allow the team to select the right representative dataset and avoid future data bias issues.
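A lightweight way to act on this is to compare the composition of the training set against the demographics of the intended market before training begins. The sketch below uses hypothetical group labels and target shares; a real team would substitute census or market-research figures for the markets it plans to serve.

```python
from collections import Counter

# Hypothetical group labels for each training example.
training_groups = ["a"] * 700 + ["b"] * 250 + ["c"] * 50

# Hypothetical share of each group in the product's target market.
target_shares = {"a": 0.55, "b": 0.35, "c": 0.10}

counts = Counter(training_groups)
total = sum(counts.values())

# Flag any group whose share of the dataset falls well below its
# share of the target market (here, below 80% of the target share).
for group, target in target_shares.items():
    actual = counts.get(group, 0) / total
    flag = "UNDER-REPRESENTED" if actual < 0.8 * target else "ok"
    print(f"{group}: dataset {actual:.0%} vs. market {target:.0%} [{flag}]")
```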
Given the implications for society, it's just a matter of time before the European Union, the United States or some other legislative body passes consumer protection laws governing the use of AI/ML. Once a law is passed, those protections are likely to spread to other regions and markets around the world.
It's happened before: The passage of the General Data Protection Regulation (GDPR) in the EU led to a wave of other consumer protections around the world that require companies to prove consent for collecting personal information. Now, people across the political and business spectrum are calling for ethical guidelines around AI. Again, the EU is leading the way, having released a 2021 proposal for an AI legal framework.
Startups deploying products or services powered by AI/ML should be prepared to demonstrate ongoing governance and regulatory compliance, taking care to build these processes now, before the regulations are imposed on them later. Performing a quick scan of the proposed legislation, guidance documents and other relevant guidelines before building the product is a necessary step of an EVP.
In addition, revisiting the regulatory and policy landscape prior to launch is advisable. Having someone on your board of directors or advisory board who is embedded in the active deliberations currently happening globally would also help you anticipate what is likely to happen. Regulations are coming, and it's good to be prepared.
There's no doubt that AI/ML will present an enormous benefit to humankind. The benefits of automating manual tasks, streamlining business processes and improving customer experiences are too great to dismiss. But startups need to be aware of the impacts AI/ML will have on their customers, the market and society at large.
Startups typically have one shot at success, and it would be a shame if an otherwise high-performing product were killed because ethical concerns weren't uncovered until after it hit the market. Startups need to integrate ethics into the development process from the very beginning, develop an EVP based on RAI and continue to ensure AI governance post-launch.
AI is the future of business, but we can't lose sight of the need for compassion and the human element in innovation.