Artificial Intelligence Will Change How We Think About Leadership – Knowledge@Wharton
The increasing attention being paid to artificial intelligence raises important questions about its integration with social sciences and humanity, according to David De Cremer, founder and director of the Centre on AI Technology for Humankind at the National University of Singapore Business School. He is the author of the recent book, Leadership by Algorithm: Who Leads and Who Follows in the AI Era?
While AI today is good at repetitive tasks and can replace many managerial functions, it could over time acquire the general intelligence that humans have, he said in a recent interview with AI for Business (AIB), a new initiative at Analytics at Wharton. Headed by Wharton operations, information and decisions professor Kartik Hosanagar, AIB is a research initiative that focuses on helping students expand their knowledge and application of machine learning and understand the business and societal implications of AI.
According to De Cremer, AI will never have a soul and cannot replace human leadership qualities that let people be creative and have different perspectives. Leadership is required to guide the development and applications of AI in ways that best serve the needs of humans. "The job of the future may well be [that of] a philosopher who understands technology, what it means to our human identity, and what it means for the kind of society we would like to see," he noted.
An edited transcript of the interview appears below.
AI for Business: A lot is being written about artificial intelligence. What inspired you to write Leadership by Algorithm? What gap among existing books about AI were you trying to fill?
David De Cremer: AI has been around for quite some time. The term was coined in 1956 and inspired a first wave of research until the mid-1970s. But since the beginning of the 21st century, more direct applications became clear and changed our attitude toward the real potential of AI. This shift was especially fueled by events where AI started to engage with world champions in chess and the Chinese game Go. Most of the attention went, and still goes, to the technology itself: that the technology acts in ways that seem to be intelligent, which is also a simple definition of artificial intelligence.
It seems intelligent in ways that humans are intelligent. I am not a computer scientist; my background is in behavioral economics. But I did notice that the integration between social sciences, humanity, and artificial intelligence was not getting as much attention as it should. Artificial intelligence is meant to create value for society that is populated by humans; the end users always must be humans. That means AI must act, think, read, and produce outcomes in a social context.
AI is particularly good at repetitive, routine tasks and thinking systematically and consistently. This already implies that the tasks and the jobs that are most likely to be taken over by AI are the hard skills, and not so much the soft skills. In a way, this observation corresponds with what is called Moravec's paradox: What is easy for humans is difficult for AI, and what is difficult for humans seems rather easy for AI.
An important conclusion is then also that in the future development of humans, training our soft skills will become even more important, not less, as many may assume. I wanted to explain that because there are many signs today, especially since COVID-19, that we need and are required to adapt more to the new technologies. As such, that puts the use and influence of AI in our society in a dominant position. As we are becoming more aware, we are moving into a society where people are being told by algorithms what their taste is, and, without questioning it too much, most people comply easily. Given these circumstances, it does not seem to be a wild fantasy anymore that AI may be able to take a leadership position, which is why I wanted to write the book.
We are moving into a society where people are being told by algorithms what their taste is, and, without questioning it too much, most people comply easily.
AIB: Is it possible to develop AI in a way that makes technology more efficient without undermining humanity? Why does this risk exist? Can it be mitigated?
De Cremer: I believe it is possible. This relates to the topic of the book as well. [It is important] that we have the right kind of leadership. The book is not only about whether AI will replace leaders; I also point out that humans have certain unique qualities that technology will never have. It is difficult to put a soul into a machine. If we could do that, we would also understand the secrets of life. I am not too optimistic that it will [become reality] in the next few decades, but we have an enormous responsibility. We are developing AI or a machine that can do things we would never have imagined years ago.
At the same time, because of our unique qualities of having and taking perspective, proactive thinking, and being able to take things into abstraction, it is up to us how we are going to use it. If you look at leadership today, I do not see much consensus in the world. We are not paying enough attention to training our leaders: our business leaders, our political leaders, and our societal leaders. We need good leadership education. Training starts with our children. [It is about] how we train them to appreciate creativity, the ability to work together with others, take perspectives from each other, and learn a certain kind of responsibility that makes our society work. So yes, we can use machines for good if we are clear about what our human identity is and the value we want to create for a humane society.
AIB: Algorithms are becoming an important part of how work is managed. What are the implications?
De Cremer: An algorithm is a model that makes data intelligent, meaning it helps us to recognize the trends that are happening in the world around us, and that are captured by means of our data collections. When analyzed well, data can tell us how to deal with our environment in a better and more efficient manner. This is what I'm trying to do in the business school: seeing how we can make our business leaders more tech savvy in understanding how, where, and why to use algorithms and automation for more efficient decision-making.
Many business leaders have problems making business cases for why they should use AI. They are struggling to make sense of what AI can bring to their companies. Today most of them are influenced by surveys showing that as a business you have to engage in AI adoption because everyone else is doing it. But how it can benefit your own unique company is often less well understood.
Every company has data that is unique to it. You must work with that in terms of [shaping] your strategy, and in terms of the value that your company can and wishes to create. For this to be achieved, you also have to understand the values that define your company and that make it different from your competitors. We are not doing a good job training our business leaders to think like this. Rather than making them think that they should become coders themselves, they should focus on becoming a bit more tech savvy so they can pursue their business strategy in line with their values in an environment where technology is part of the business process.
This implies that our business leaders must understand not only what an algorithm exactly does, but also what its limits are, what the potential is, and especially where in the decision-making chain of the company AI can be used to promote productivity and efficiency. To achieve this, we need leaders who are tech savvy enough to optimize their extensive knowledge of business processes to maximize efficiency for the company and for society. It is there that I see a weakness in many business leaders today.
Without a doubt, AI will become the new co-worker. It will be important for us to decide where in the business process we automate, where it is possible to take humans out of the loop, and where we definitely keep humans in the loop to make sure that automation and the use of AI doesn't lead to a work culture where people feel that they are being supervised by a machine, or being treated like robots. We must be sensitive to these questions. Leaders build cultures, and in doing this they communicate and represent the values and norms the company uses to decide how work needs to be done to create business value.
AIB: Are algorithms replacing the human mind as machines replaced the body? Or are algorithms and machines amplifying the capabilities of the mind and body? Should humans worry that AI will render the mental abilities of humans obsolete or simply change them?
De Cremer: That is one of the big philosophical questions. We can refer to Descartes here, [who formulated the] body and mind [problem]. With the Industrial Revolution, we can say that the body was replaced by the machine. Some people do believe that with artificial intelligence the mind will now be replaced. So, body and mind are basically taken over by machines.
We can use machines for good if we are clear about what our human identity is and the value we want to create for a humane society.
As I outlined in my book, there is more sophistication to that. We also know that the body and mind are connected. What connects them is the soul. And that soul is not a machine. The machine at this moment has no real grasp of what it means to understand its environment or how meaning can be inferred from it. Even more important in light of the idea of humanity and AI, a machine does not think about humans, or what it means to be a human. It does not care about humans. If you die today, AI does not worry about that.
So, AI does not have a connection to reality in terms of understanding semantics and deeply felt emotions. AI has no soul. That is essential for body and mind to function. We say that one plus one is three if you want to make a great team. But in this case, if we say AI or machines replace the body and then replace the mind, we still have one plus one is two; we do not have three, we don't have the magic. Because of that, I do not believe AI is replacing our mind.
Secondly, the simple definition that I postulated earlier is that artificial intelligence represents behaviors, or decisions that are being made by a machine, that seem intelligent. That definition is based on the idea that machine intelligence is able to imitate the intelligent behavior that humans show. But that machines seem able to act in ways like humans does not mean that we are talking about the same kind of intelligence and existence.
When we look at machine learning, it is modeled after neural networks. But we also know, for example, that neuroscience still knows little, maybe not even 10%, of how the brain works. So, we cannot say that we know everything and put that in a machine and argue that it replicated the human mind completely.
The simplest example I always use is that a computer works in ones and zeroes, but people do not work in ones and zeroes. When we talk about ethics with humans, things are mostly never black or white, but rather gray. As humans we are able to make sense of that gray area, because we have developed an intuition, a moral compass in the way we grew up and were educated. As a result, we can make sense of ambiguity. Computers at the moment cannot do that. Interestingly, efforts are being made today to see whether we can train machines like we educate children. If that succeeds, then machines will come closer to dealing with ambiguity as we do.
AIB: What implications do these questions have for leadership? What role can leaders play in encouraging the design of better technology that is used in wiser rather than smarter ways?
That machines seem able to act in ways like humans does not mean that we are talking about the same kind of intelligence and existence.
De Cremer: I make a distinction between managers and leaders. When we talk about running an organization, you need both management and leadership. Management provides the foundation for companies to work in a stable and orderly manner. We have procedures so we can make things a little bit more predictable. Since the early 20th century, as companies grew in size, you had to manage companies and [avoid] chaos. Management is thus the opposite of chaos. It is about structuring and [bringing] order to chaos by employing metrics to assess whether goals and KPIs are achieved in more or less predictable ways. In a way, management as we know it is a status-quo-maintaining system.
Leadership, however, is not focused on the status quo but rather deals with change and the responsibility to give direction to deal with the chaos that comes along with change. That is why it is important for leadership to be able to adapt, to be agile, because once things change, as a leader you are looked upon to [provide solutions]. That is where our abilities come in: to be creative, to think in proactive ways, to understand what value people want to see, and to adapt to ensure that this kind of value is achieved when change sets in.
AI will be extremely applicable to management because management is consistent, it tries to focus on the status quo, and because of its repetitiveness it is in essence a pretty predictable activity, which is basically also how an algorithm works. AI is already doing this kind of work by predicting the behavior of employees, whether they will leave the company, or whether they are still motivated to do their job. Many managerial decisions are where I see algorithms can play a big role. It starts with AI being an advisor, providing information, but then slowly moving into management jobs. I call this management by algorithm, or MBA. Theoretically and from a practical point of view, this will happen, because AI as we know it today in organizations is good at working with stationary data sets. It, however, has a problem dealing with complexities. This is where AI, as we know it today, falls short on the leadership front.
Computer scientists working in robotics and with self-driving cars say the biggest challenge for robots is interacting with people, physical contact, and coordinating their movements with the execution of tasks. Basically, it is more difficult for robots to work within the context of teams than to send a robot to Mars. The reason for this is that the more complex the environment, the more likely it is that robots will make mistakes. As we are less tolerant of having robots inflict harm on humans, it thus becomes a dangerous activity to have autonomous robots and vehicles interacting with humans.
Leadership is about dealing with change. It is about making decisions that you know are valuable to humans. You need to understand what it means to be a human, that you can have human concerns, taking into account that you can be compassionate, and you can be humane. At the same time, you need to be able to imagine and be proactive, because your strategy in a changing situation may need to be adjusted to create the same value. You need to be able to make abstraction of this, and AI is not able to do this.
AIB: I am glad you brought up the question of compassion. Do you believe that algorithm-based leadership is capable of empathy, compassion, curiosity, or creativity?
[Artificial intelligence] has a problem dealing with complexities. This is where AI, as we know it today, falls short on the leadership front.
De Cremer: Startups and scientists are working on what we call affective AI. Can AI detect and feel emotions? Conceptually it is easy to understand. So, yes, AI will be able to detect emotions, as long as we have enough training data available. Of course, emotions are complex, also to humans, so really understanding what emotions signify to the human experience is something AI will not be able to do (at least in decades to come). As I said before, AI does not understand what it means to be human, so taking the emotional intelligence perspective of what makes us human is clearly a limit for machines. That is also why we call it artificial intelligence. It is important to point out that we can also say that humans have an AI; I call that authentic intelligence.
At this moment AI does not have authentic intelligence. People believe that AI systems cannot have authentic emotions and an authentic sense of morality. It is impossible because they do not have the empathic and existential qualities people are equipped with. Also, I am not too sure that algorithms will achieve authentic intelligence easily, given the fact that they do not have a soul. So, if we cannot infuse them with a common sense that corresponds to the common sense of humans, which can make sense of gray zones and ambiguity, I don't think they can develop a real sense of empathy, which is authentic and genuine.
What they can learn, because of the imitation principle, is what we call surface-level emotions. They will be able to respond, they will scan your face, they will listen to the tone of your voice, and they will be able to identify categories of emotions and respond to them in ways that humans usually respond. That is a surface-level understanding of the emotions that humans express. And I do believe that this ability will help machines to be efficient in most interactions with humans.
Why will it work? Because as humans we are very attuned to the ability of our interaction partners to respond to our emotions. So almost immediately and unconsciously, when someone pays attention to us, we reciprocate. Recognizing surface-level emotions would already do the trick. The deeper-level emotions correspond with what I call authentic intelligence, which is genuine, and an understanding of those types of emotions is what is needed to develop friendships and long-term connections. AI as we know it today is not even close to such an ability.
With respect to creativity, it is a similar story. Creativity means bringing forward a new idea, something that is new and meaningful to people. It solves a problem that is useful, and it makes sense to people. AI can play a role there, especially in identifying something new. Algorithms are much faster than humans in connecting information because they can scan, analyze, and observe trends in data so much faster than we do. So, in the first stage of creativity, yes, AI can bring things we know together to create a new combination so much faster and better than humans. But, humans will be needed to assess whether the new combination makes sense to solve problems humans want to solve. Creative ideas gain in value when they become meaningful to people and therefore human supervision as the final step in the creativity process will be needed.
One of the concerns we have today is that machines are not reducing inequality but enhancing it.
Let me illustrate this point with the following example: Experiments have been conducted where AI was given several ingredients to make pizzas, and some pizzas turned out to be attractive to humans, but other pizzas ended up being products that humans were unlikely to eat, like pineapple with Marmite. Marmite is popular in the U.K., and according to the commercials, people love it or hate it, so it's a difficult ingredient. AI, however, does not think about whether humans will like such products or find them useful; it just identifies new combinations. So, the human will always be needed to determine whether such ideas will at the end of the day be useful and regarded as a meaningful product.
AIB: What are the limits to management by algorithm?
De Cremer: When we look at it from the narrow point of view of management, there are no limits. I believe that AI will be able to do almost any managerial task in the future. That is because of the way we define management as being focused on the idea of creating stability, order, consistency, predictability, by means of using metrics (e.g., KPIs).
AIB: How can we move towards a future where algorithms may not lead but still be at the service of humanity?
De Cremer: First, all managers and leaders will have to understand what AI is. They must understand AI's potential and its limits, where humans must jump in and take responsibility. Humanity is important. We have to make sure that people do not look at technology only from a utility perspective, where it can make a company run more efficiently because it reduces cost by not having to hire too many employees or not training people anymore to do certain tasks.
I would like to see a society where people become much more reflective. The job of the future may well be [that of] a philosopher, one who understands technology, what it means to our human identity, and what it means for the kind of society we would like to see. AI also makes us think about who we are as a species. What do we really want to achieve? Once we make AI a coworker, once we make AI a kind of citizen of our societies, I am sure the awareness of the idea of "us versus them" will become directive in the debates and discussions about the kind of institutions, organizations, and society we would like to see. I called this awareness the new diversity in my book. Humans versus non-humans, or machines: It makes us think also about who we are, and we need that to determine what kind of value we want to create. That value will determine how we are going to use our technology.
One of the concerns we have today is that machines are not reducing inequality but enhancing it. For example, we all know that AI, in order to learn, needs data. But is data widely available to everyone or only to a select few? Well, if we look at the usual suspects, Amazon, Facebook, Apple, and so forth, we see that they own most of the data. They applied a business model where the customer became the product itself. Our data are valuable to them. As a result, these companies can run more sophisticated experiments, which are needed to improve their AI, which means that the technology is also in the hands of a few. Democracy of data does not exist today. Given the fact that one important future direction in AI research is to make AI more powerful in terms of processing and predicting, obviously a certain fear exists that if we do not manage AI well, and we don't think about it in terms of [whether] it is good for society as a whole, we may run into risks. Our future must be one where everyone can be tech-savvy, but not one that eliminates our concerns and reflections on human identity. That is the kind of education I would like to see.