What the Tech Sector Can Learn From TikTok: Trust Is Everything … – Tanium Endpoint
Life hasn't been kind to social media giant TikTok recently. First, Congress grilled it over its privacy practices. Then the UK government fined it £12.7 million (about $15.8 million) for using children's personal data without parental consent. The list of governments, including the US, EU, Canada, New Zealand, and Australia, that ban its use by employees continues to grow. And in the US, users supporting a TikTok ban outnumber those who don't by two to one.
TikTok isn't alone. Scrutiny of the tech sector is at an all-time high, and hating on Big Tech these days is arguably the one thing Republicans and Democrats can agree on. "It is time for 2023," said Sen. Amy Klobuchar (D-MN) on NBC News, discussing recent bipartisan legislative efforts to rein in Big Tech. "Let it be our resolution that we finally pass one of these bills."
In a word, it comes down to trust. For tech companies, that resource is in short supply, and regaining it is now mission-critical.
The problem: Trust requires ethical awareness, and most companies, tech and nontech alike, lack an ethical framework for technology, according to Deloitte's inaugural State of Ethics and Trust in Technology report, released in December.
The report surveyed almost 1,800 professionals across eight sectors (including technology, financial services, healthcare, and government) on ethical approaches to emerging technologies such as autonomous vehicles, quantum computing, and augmented or virtual reality. It found that 87% lack, or are unaware of, any ethical principles governing the development and use of emerging technology within their organizations.
While this finding applies to enterprises across the board, it matters most to tech companies, which now operate under heightened scrutiny and in a tough economy.
"Trust is the new currency in an increasingly competitive environment," says Rozita Dara, associate professor of computer science at the University of Guelph in Ontario, Canada. "Trust gives organizations a competitive edge," she adds.
Ethical gaps show up in different ways. TikTok's manifested in its alleged misuse of data and in addictive qualities that critics worry ensnare users. A Harvard University study revealed racial bias in facial recognition technology developed by IBM and Microsoft. And observers have identified gender bias in Google Translate and in an AI-powered hiring recruitment system at Amazon (which the company eventually scrapped).
Trust in a company depends on its commitment to the ethical use of technology. And that goes doubly for companies that actually create that technology.
"We set up an expectation for what this technology is going to be and what it's going to do," says Yasemin J. Erden, assistant professor at the University of Twente's Digital Society Institute in Enschede, Netherlands. In promoting the tech to users, there's a risk that the maker or vendor won't fully capture its implications. "All of that impacts on the trust that people have in the technology."
We are seeing this play out now with AI technology. In March, more than 1,100 computer scientists and other tech luminaries, including Elon Musk, signed an open letter asking all labs to pause the training of any models more powerful than GPT-4, the large language model from research company OpenAI that powers its controversial ChatGPT service.
[Read also: Yes, ChatGPT will turbocharge hackingand help fight it, too]
Erden feels uncomfortable discussing the specifics of GPT's latest version because its makers have not been transparent enough. "We don't know exactly how it's doing what it's doing," she warns, echoing complaints from other AI experts. "So then how can we really assess the claims that are made about it?" (The fact that this little-understood chatbot was quickly co-opted by cyber threat actors to create malware and other dangerous content doesn't help matters either.)
Standards and policies governing the technology industry are coming; there's no question about that. But until they take effect, it is critical that tech firms enact their own ethical principles, which can serve them both internally (to guide the development of trustworthy new technologies) and externally (to win over consumers).
Deloitte's report serves as a useful primer. It advises company leaders to meet with the actual teams completing the work. And to get the conversation started, it offers a seven-part framework for diagnosing the ethical health of a tech company's products and services, laying out the qualities that any new technology a firm employs should embody.
"Defining the principles is just one part of the challenge. The other is executing them," points out Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University. "How do you actually make these things happen in the context of a corporation when you're creating new products?" he asks.
The Markkula Center has a toolkit meant for engineers and technology designers to help tackle this process. It will shortly release a handbook on applying technology ethics.
[Read also: More companies are practicing privacy by design to prioritize data securityand avoid hefty fines. Heres why you should, too]
Dara cites ethics by design as a foundational best practice. It bakes ethics into the development of a product or service from the beginning, acknowledging its entire ecosystem, including users' rights and interests. It's like adding "Eth" to the DevSecOps team: developers must weigh ethics as they test a tool's functionality, evaluating its reliability and applicability.
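To make the idea concrete, here is a minimal, purely illustrative sketch of what an ethics-by-design gate might look like inside a release pipeline: each ethical principle becomes a recorded check that can block a release, just as failing security tests would in DevSecOps. The `EthicsReview` class and the principle names are hypothetical, not part of any real framework or tool.

```python
# Hypothetical sketch of an "ethics by design" release gate.
# Each check mirrors an ethical principle; a release is blocked
# until every recorded principle passes, alongside functional tests.
from dataclasses import dataclass, field


@dataclass
class EthicsReview:
    """Records whether a release candidate satisfies each ethical principle."""
    checks: dict = field(default_factory=dict)

    def record(self, principle: str, passed: bool, note: str = "") -> None:
        # Store the outcome and an auditable note for this principle.
        self.checks[principle] = (passed, note)

    def release_approved(self) -> bool:
        # Block the release unless at least one check exists and all pass.
        return bool(self.checks) and all(p for p, _ in self.checks.values())

    def failures(self) -> list:
        # List the principles that still block the release.
        return [name for name, (p, _) in self.checks.items() if not p]


review = EthicsReview()
review.record("transparency", True, "model card published")
review.record("privacy", True, "no personal data in training set")
review.record("fairness", False, "bias audit still pending")

print(review.release_approved())  # prints False: fairness audit incomplete
print(review.failures())          # prints ['fairness']
```

The point of the sketch is that the ethical review leaves the same kind of pass/fail trail as any other pipeline stage, so it cannot be silently skipped.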
As Deloitte's report points out, the application of specific ethical principles may vary across technologies. AI has different implications, users, and technical characteristics than, say, quantum computing, blockchain, or virtual reality.
Ethicists should drill down on the specifics based on a companys individual parameters, which raises the question: Among the companies exploring this new ethical landscape, are any doing it right?
Erden nominates Mozilla, the California-based software maker that is part foundation, part corporation.
"They're really transparent: they're clear about what their aims are, what their limitations are, what they're doing, and what they're changing," she says. "I think there's a lot of respect for their platforms, like Firefox, and their ambitions."
Mozilla also invests heavily in engagement, Erden points out. It has demonstrated its commitment to ethical technology in its manifesto and through initiatives such as its Responsible Computing Challenge, its educational material on navigating ethical issues in the tech industry, and now its Responsible AI Challenge. And these just scratch the surface.
[Read also: Lacking an ethical framework is a business risk. Here are three other pressing risksand ways to reduce them]
Green co-authored a World Economic Forum case study on ethics at IBM and cites Big Blue as a leader here. The company established an AI ethics board in 2018, published its own principles for trust and transparency, and supported them with five pillars of trust, advocating values like privacy and fairness. It has also donated tools to the open-source community to help with adversarial robustness (that is, defending AI against misuse by attackers).
He wrote a similar analysis on Salesforce, which in 2018 developed its Office of Ethical and Humane Use of Technology, and another on Microsoft.
Of course, no company is perfect, which is why many experts in the thorny field of ethics try to avoid issuing blanket stamps of approval.
"All we can do is assess individual practices, individual technologies, and individual steps," concludes Erden. It's an iterative process that evaluates actions on their ethical merits and should, in theory, encourage companies to keep striving to improve, product by product, service by service. "There's no end to that."