LSE leads the way with new AI Management course
Please find a Q&A with Dr Aaron Cheng about the new course below:
Can you tell me about the course and its content?
The course title is Managing Artificial Intelligence. As you can tell, it takes a human-centric approach to AI. I proposed this course because we have seen many courses at our School and others worldwide focusing on the technical capability of big data and AI. They help students see the potential of this technology rather than give a hands-on managerial perspective and guidelines for how to manage AI.
For the course, we have 10 lectures covering both the technical and managerial aspects of AI, as well as the social and ethical considerations, balanced to give students different perspectives on AI.
The course is supplemented with nine seminars so students can be exposed to, and engage in, the real-world managerial practices of AI. Among them, we have three case study sessions covering product development, human-in-the-loop, business models, and global strategy for AI applications in various contexts, such as social media, healthcare, and telecommunications. So it's a fascinating line-up of teaching cases showing that AI is real and that managing AI is now the priority of many organisations, not something we are envisioning and predicting for the future.
We also have an interesting debate on generative AI, the newest form of AI that can automatically generate content for people to use. We have seen lots of applications around it (e.g., ChatGPT) nowadays. In one of the seminars, students were assigned to five roles (employer, university, teachers, students, and the AI vendor) and debated the role of this technology in higher education. We wanted to see what kind of issues emerged in this ecosystem, and we did have interesting conversations when students walked in the shoes of the different roles. This debate also yielded some regulatory implications for how AI should be managed in the higher education context.
The most exciting task for students is the team project on AI management. Student teams develop, present, and progress their projects across four seminars, incorporating what they have learned in the lectures into their AI projects. Most of the teams start with a pressing business or societal challenge and then develop their start-ups around an AI solution.
Some of the students looked at whether journalism or public relations work can be fully automated, and in the end they concluded it cannot. One of the teams looked at how predictive analytics can be used to help university students and teachers book spaces and make appointments. As you can tell, all of these projects are innovative and could genuinely be brought to market, so the students are very excited about that.
Overall, we find that students love the course. Their learning has kept pace with the rapid changes in the field of AI, especially in the past several months since the launch of ChatGPT, with many technology companies racing against each other to push innovations forward on a daily basis. The field is fascinating, although it creates course design challenges for us to keep up.
Is the course designed for students working for companies coming up with AI projects?
It can be for students who wish to work in any sector that is now embracing this technology. It's important to note that although we need IT developers and data scientists to create AI and data-driven solutions, we need many more professionals who know both technology and management to diffuse such innovations.
These professionals, often called business analysts or managers at different levels of an organisation, can lead digital transformation, and they often act as middlemen connecting the supply of and demand for AI and analytics solutions. Statistics from the McKinsey Global Institute show that the shortage of managers and analysts who can use their know-how of big data and AI for effective decision-making is ten times greater than the shortage of data scientists or machine learning (ML) engineers, who specialise mainly in programming.
To meet this demand for managerial talent in AI, my course does not focus on teaching students how to design the technology but on how to manage it and lead digital transformation with AI.
It's also important to mention the programme that hosts this course, Management Information Systems and Digital Innovation (MISDI), a flagship master's programme of the Information Systems and Innovation Group (ISIG) in the Department of Management (DoM). The faculty expertise in ISIG and the course offerings in MISDI centre on connecting technology know-what with business and management know-how, giving students an edge through this connection.
This is also a course driven by student demand. Over the past several years, students in MISDI and other programmes in DoM have developed a strong interest in AI issues, and many have used topics in AI management for their coursework and dissertations. However, we did not have a specialised course for it.
In other departments at LSE, such as Statistics, there are very good AI and ML courses, but most of them are taught from the perspective of statisticians or computer scientists. Since 2021, we have had an LSE100 course on how to control AI, which is very well designed from a social science perspective but only available to undergraduate students.
To better meet the needs of master's students studying AI management, we have launched this new course in MISDI to integrate multiple perspectives on AI, focus on the managerial considerations, and give a comprehensive and critical treatment of the automation and augmentation roles of AI for individuals, organisations, and society at large.
Is the course designed for people interested in the business side of AI?
I would say so, but I want to stress that it's a more balanced course that also attracts students whose interests may go beyond business. Another important thing to mention is that the course is situated in a polarised public discourse with diverse views toward AI.
We have seen two camps. One is held by those who worry about AI and the social and ethical implications of replacing humans in the workplace. The other is a utopian view of AI held by those who advocate only for its technical capability to extend the capabilities of humans. The latter obviously has a more positive view of AI but sometimes downplays the existential threats to humans themselves, especially when AI intensifies inequality among people who do not have the knowledge or skills to manage it.
These two camps are very big now but heavily segregated. I feel that they do not talk to each other in a very productive way, as they often debate using distinct language systems. I believe effective communication between these camps is much needed in contemporary society, and people should know the underlying logic and assumptions of both before they develop beliefs and actions about AI. This is especially true for current and future leaders in the private and public sectors. They really need to gain a deep understanding of the potential, promise, and perils of AI, and to have a sober view of the AI hopes and hypes claimed by the two camps.
I hope this course can plant seeds deep in the hearts of these students, so that when they develop professional careers as business leaders and social planners, they know what AI is and, more importantly, they take responsibility for managing AI for a better future for humanity. At the end of the day, we should be able to create strong AI but also to cultivate our own humanity and achieve shared prosperity with AI. This is the overarching idea of the course.
What makes this course unique and different?
Let me talk about similar courses and the difference my course makes in AI management education.
I have attended the biggest IT Teaching Workshop in my field (Information Systems) almost every year for the last five years. In the Workshop, teachers from universities across the United States and Europe present their courses on big data and data analytics, yet I have not seen many specialist courses on AI.
Of course, in the Computer Science community, there are many popular courses about machine learning and data science, but they rarely say that these are AI courses. It is important to note that the concept of AI is not just technical but socio-technical. We need to study and teach the nature and implications of AI by examining its technical properties and also its social contexts. As far as I know, few courses have struck such a balance.
One reason why most courses focus on the technicality of AI is obvious: STEM jobs are much better paid than many others. Preparing students for such jobs helps increase the popularity of universities, which further encourages the offering of technical AI or data science courses.
Leading the social science approach in higher education, LSE's strength is in cultivating leaders who can think through and navigate social change, especially the current transformational change led by AI. As such, we offer this new LSE course to situate the debate on AI within academic and public discourse and to approach AI education in a more comprehensive and critical way. We start with the history of AI, we discuss the role of data in making AI, and we unpack the black box of algorithms and the issues involved (e.g., opacity, bias, interpretability).
Then we walk students through a socio-technical analysis of AI management at different levels. At the individual level, we assess the role of humans in the loop and when and how human judgment needs to be exercised in designing and using AI. At the organisational level, we analyse business models, operations, and innovation with, and governance of, AI. At the societal level, we discuss the ethical concerns and regulatory efforts around managing AI for good. As you can tell, with this approach to AI, students start to think about and raise their own critical questions about AI management in the digital economy.
What made you personally think this course was really needed?
To answer this question, I would like to start with my educational background and then my reading and thinking about AI over the past decade.
Starting with my college education 15 years ago, I have been in the same discipline, management information systems, and initially my training was technical and particularly computer science oriented. My understanding of technology then deepened when I moved to my master's programme and was exposed to a more behavioural perspective on how people interact with technology. Later, my PhD training in the economic analysis of information technology helped me engage in studying the bigger role of technology in businesses and society.
Now I am a researcher and teacher of information systems and innovation, and LSE has really broadened my horizons on the social science approach to technology. Throughout my educational journey, AI has been with me for many years, albeit more often in the form of algorithms or machine learning techniques.
AI did not catch much of my attention, and I am sure the same is true for many others, until the field boomed, especially when deep learning and generative models were developed and used to create powerful applications such as deepfakes and ChatGPT. People now say that the era of artificial general intelligence is coming, in contrast to the past decades of artificial narrow intelligence, in which AI could only serve a small set of pre-specified purposes and automate tasks much like ordinary software does.
Over time I realised AI has so much potential to change human life in positive ways. At the same time, worry about the apocalyptic claim that machines will be the end of humanity has reached an all-time high. I think it's time for us to seriously think about and study how to manage AI.
Teaching AI management is an opportunity for me as a researcher to explore with students the socio-technical nature and implications of AI and how we can be more responsible in designing and deploying AI. I am happy that my students have been excited about this course and really engaged in and benefitted from this journey.