Archive for the ‘Artificial General Intelligence’ Category

What will AI do to question-based inquiry? (opinion) – Inside Higher Ed

Image credits: Twemoji (question mark image) and Just_Super from Getty Images Signature (AI photograph).

Since the release of ChatGPT in late 2022, many questions have been raised about the impact of generative artificial intelligence on higher education, particularly its potential to automate the processes of research and writing. Will ChatGPT end the college essay or prompt professors, as John Warner hopes, to revise our pedagogical ends in assigning writing? At Washington College, our Cromwell Center for Teaching and Learning organized a series of discussions this past spring motivated by questions: What is machine learning doing in education? How might we define its use in the classroom? How should we value it in our programs and address it in our policies? True to the heuristic nature of inquiry in the liberal arts and sciences, this series generated robust but unfinished conversations that elicited some initial answers and many more questions.

And yet, as we continue to raise important questions about AI while adapting to it, surprisingly few questions have been asked of AI, literally. I have come to notice that the dominant grammatical mood in which AI chatbot conversations are conducted or prompted is the imperative. As emphasized by the new practitioners of prompt engineering (the skillful eliciting of output from an AI model, now a lucrative career opportunity), chatbots respond best to explicit commands. The best way to ask AI a question, it seems, is to stop asking it questions.

Writing in The New York Times' On Tech: AI newsletter, Brian X. Chen defines "golden prompts" as the art of asking questions that will generate the most helpful answers. However, Chen's prompts are all commands (such as "act as if you are an expert in X"), no interrogatives, and not even a "please" recommended for the new art of computational conversation. Nearly every recommendation I have seen from AI developers perpetuates this drift of question-based inquiry into blunt command. Consider prominent AI adopter and Wharton School professor Ethan Mollick. Observing the tendency of students to get poor results from chatbot inquiry because they ask detailed questions, Mollick proposes a simple solution. Instead of guiding or instructing the chatbot with questions, Mollick writes, "tell it what you want it to do" and, a point made through an unnerving analogy, boss it like you would an intern.


Why should it matter that our newest writing and research technologies are rapidly shifting the modes and moods of inquiry from interrogatives to imperatives? Surely many seeking information from an internet search no longer phrase inquiry as a question. But I would agree with Janet H. Murray that new digital environments for AI-assisted inquiry do not merely add to existing modes of research, but instead establish new expressive forms with different characteristics, new affordances and constraints. First among these for Murray, writing in Hamlet on the Holodeck (MIT Press, 1998), is the procedural or algorithmic basis of digital communication. A problem-solving procedure, an algorithm follows precise rules and processes that result in a specific answer, a predictable and executable outcome.

Algorithmic procedure might provide a beneficial substructure for fictional narrative, driving a reader (or player, in the case of a video game) toward the resolution of a complex and highly determined plot. But algorithmic rules could also pose a substantial constraint for students learning to write an essay, where more open-ended heuristics, or brief, general rules of thumb and adaptive commonplaces, are more appropriate for composition that aims for context-contingent persuasion, plausibility not certainty.

Drawing on lessons from cognitive psychology, educator Mike Rose long ago addressed the problem of writer's block in these very terms of algorithm and heuristic. Process and procedure are necessary for writing, but when writing is presented algorithmically, as a rigid set of rules to execute, developing writers can become cognitively blocked. Perhaps you remember, as I do, struggling to reconcile initial attempts at drafting an essay with a lengthy, detailed Harvard outline worked out entirely in advance. Rose's seminal advice from 1980, that educators present learning prompts more heuristically and less absolutely, remains timely and appropriate for the new algorithms of AI.

In turning questions into commands, while still referring to them as questions, we perpetuate cognitive blocking while inducing, apparently, intellectual idiocy. (Ask better questions by not asking them?) We transform key rhetorical figures of inquiry like "question" and "conversation" into dead metaphor. Consider what is happening to the word "prompt." Students know the word, at least for now, as a term of art in writing pedagogy: the guidelines for an assignment in which instructors identify the purpose, context and audience for the writing, preparing the grounds for the type of question-based inquiry the students will be pursuing. In The Craft of Research (University of Chicago Press), the late Wayne Booth and his colleagues refer to these heuristic guidelines as helping students make rhetorically significant choices.

Reaching back to classical rhetoric, heuristics such as Aristotle's topics of invention or the four questions of stasis theory provide adaptive and responsive ideas and structures toward possible responses, not determined answers. When motivating questions are displaced by commands, AI-generated inquiry risks rhetorical unresponsiveness. When answers to unasked questions are removed from audience and context, the opaque information retrieved is no longer in need of a writer. The user can command not just the answer but also its arrangement, style and delivery. If inquiry is offloaded to AI, why not the entire composition?

As educators we should worry, along with Nicholas Carr in The Glass Cage (W.W. Norton, 2014), about the cognitive de-skilling that attends the automation of intellectual inquiry. Writing before ChatGPT, Carr was already thinking about the ways that algorithmic grading programs might drift into algorithmic writing and thinking. As it becomes more efficient to pursue question-based inquiry without asking questions, we potentially lose more than the skill of posing questions. We potentially lose the means and the motivation for the inquiry. It is hard to be curious about ideas when information can be commanded.

As we continue to raise questions about AI, we need not resist all things algorithmic. After all, we have been working and teaching with rule-based procedures since long before the computer. But we can choose, as educators, to use emerging algorithmic tools more heuristically and with more rhetorically significant purpose. Rhetorically speaking, the best heuristics are simple concepts that can be applied to interrogate and sort through complex ideas, adapting prior knowledge to new contexts: What is X? Who values it? How might X be viewed from alternative perspectives? Such is inquiry, which, like education, can be guided but hardly commanded. If we are going to use AI tools to find and shape answers to our questions, we should generate and pose the questions.

Sean Ross Meehan, Ph.D., is a professor of English and director of writing and co-director of the Cromwell Center for Teaching and Learning at Washington College.

Original post:

What will AI do to question-based inquiry? (opinion) - Inside Higher Ed

The Department of State’s pilot project approach to AI adoption – FedScoop

With the release of ChatGPT and other large language models, generative AI has clearly caught the public's attention. This new awareness, particularly in the public sector, of the tremendous power of artificial intelligence is a net good. However, excessive focus on chatbot-style AI capabilities risks overshadowing applications that are both innovative and practical and seek to serve the public through increased government transparency.

Within government, there are existing projects that are more mature than AI chatbots and are immediately ready to deliver more efficient government operations. Through a partnership between three offices, the Department of State is seeking to automate the cumbersome process of document declassification and prepare for the large volume of electronic records that will need to be reviewed in the next several years. The Bureau of Administration's Office of Global Information Services (A/GIS), the Office of Management Strategy and Solutions' Center for Analytics (M/SS CfA), and the Bureau of Information Resource Management's (IRM) Messaging Systems Office have piloted and are now moving toward production-scale deployment of AI to augment an intensive, manual review process that normally necessitates a page-by-page human review of 25-year-old classified electronic records. The pilot focused mainly on cable messages, which are communications between Washington and the department's overseas posts.

The 25-year declassification review process entails a manual review of electronic, classified records at the confidential and secret levels in the year that their protection period elapses; in many cases, 25 years after original classification. Manual review has historically been the only way to determine if information can be declassified for eventual public release or must be exempted from declassification to protect information critical to our nation's security.

However, manual review is a time-intensive process. A team of about six reviewers works year-round to review classified cables and must use a triage method to prioritize reviewing the cables most likely to require exemption from automatic declassification. In most years, they are unable to review all of the 112,000 to 133,000 electronic cables under review from 1995-1997. The risk of not being able to review each document for any sensitive material is exacerbated by the increasing volume of documents.

This manual review strategy is quickly becoming unsustainable. Around 100,000 classified cables were created each year between 1995 and 2003. The number of cables created in 2006 that will require review grew to over 650,000 and remains at that volume for the following years. While emails are currently an insignificant portion of 25-year declassification reviews, the number of classified emails doubled every two years after 2001, rising to over 12 million emails in 2018. To get ahead of this challenge, we have turned to artificial intelligence.

Considering AI is still a cutting-edge innovation with uncertainty and risk, our approach started with a pilot to test the impact of the process on a small scale. We trained a model, using human declassification decisions made in 2020 and 2021 on cables classified confidential and secret in 1995 and 1996, to recreate those decisions on cables classified in 1997. Over 300,000 classified cables were used for training and testing during the pilot. The pilot took three months and five dedicated data scientists to develop and train a model that matches previous human declassification review decisions at a rate of over 97 percent, with the potential to reduce over 65 percent of the existing manual workload. The pilot approach allowed us to consider and plan for three AI risks: lack of human oversight of automated decision-making, the ethics of AI, and overinvestment of time and money in products that aren't usable.
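The article does not describe the underlying model, but the workflow it outlines, learning prior human declassify-or-exempt decisions from cable text and replaying them on newer cables, corresponds to a standard supervised text-classification setup. Below is a minimal sketch of that general approach, assuming scikit-learn and a hypothetical export of reviewed cables; it is not the Department's actual pipeline.

```python
# Minimal sketch of the declassification-review pilot described above.
# Assumes a hypothetical cables.csv with columns "text" and "decision"
# ("declassify" or "exempt"); the Department's real model and data are not published.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

cables = pd.read_csv("cables.csv")  # hypothetical export of previously reviewed cables

# Train on cables already decided by human reviewers and hold out a test set
# to estimate agreement with those prior decisions.
train_text, test_text, train_label, test_label = train_test_split(
    cables["text"], cables["decision"], test_size=0.2, random_state=0
)

model = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_text, train_label)

# Agreement rate with prior human decisions (the pilot reports over 97 percent).
predictions = model.predict(test_text)
print("Agreement with human reviewers:", accuracy_score(test_label, predictions))
```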

The new declassification tool will not replace jobs. The AI-assisted declassification review process requires human reviewers to remain part of the decision-making process. During the pilot and the subsequent weeks of work to put the model into production, reviewers were consistently consulted and their feedback integrated into the automated decision process. This combination of technological review with human review and insight is critical to the success of the model. The model cannot make a decision with confidence on every cable, necessitating that human reviewers make a decision as they normally would on a portion of all cables. Reviewers also conduct quality control. A small, yet significant, percentage of cables with confident automated decisions are given to reviewers for confirmation. If enough of the AI-generated decisions are contradicted during the quality control check, the model can be re-trained to consider the information that it missed and integrate reviewer feedback. This feedback is critical to sustaining the model in the long term and to accounting for evolving geopolitical contexts. During the pilot, we determined that additional input from the Department's Office of the Historian (FSI/OH) could help strengthen future declassification review models by providing input about world events during the years of the records being reviewed.
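The human-in-the-loop routing described here (auto-decide only above a confidence threshold, send uncertain cables to reviewers, and sample a slice of confident decisions for quality control) can be sketched as follows. The threshold and sampling rate are illustrative assumptions, not figures from the pilot, and `model` refers to the hypothetical classifier in the previous sketch.

```python
import random

CONFIDENCE_THRESHOLD = 0.90  # assumed; the pilot's actual threshold is not published
QC_SAMPLE_RATE = 0.05        # assumed share of confident decisions re-checked by humans

def route_cable(model, cable_text):
    """Return ("auto", label) for confident automated decisions,
    ("human", None) when a reviewer must decide as they normally would, and
    ("qc", label) for confident decisions sampled for human confirmation."""
    probabilities = model.predict_proba([cable_text])[0]
    best_index = probabilities.argmax()
    label = model.classes_[best_index]
    confidence = probabilities[best_index]

    if confidence < CONFIDENCE_THRESHOLD:
        return "human", None           # model is unsure: human reviewer decides
    if random.random() < QC_SAMPLE_RATE:
        return "qc", label             # confident, but routed for quality control
    return "auto", label               # confident automated decision
```

If the quality-control sample turns up enough contradicted decisions, the training step above would be re-run with the corrected labels, which is the re-training loop the article describes.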

There are ethical concerns that innovating with AI will lead to governing by algorithm. Although the descriptive AI used in our pilot does not construct narrative conversations like large language models (LLMs) such as ChatGPT, it is designed to make decisions by learning from previous human inputs. The approximation of human thought raises concerns of ethical government when it replaces what is considered sensitive and specialized experience. In our implementation, AI is a tool that works in concert with humans for validation, oversight, and process refinement. Incorporating AI tools into our workflows requires continually addressing the ethical dimensions of automated decision-making.

This project also saves money, potentially millions of dollars' worth of personnel hours. Innovation for the sake of being innovative can result in overinvestment in dedicated staff and technology that cannot sustain itself or deliver long-term cost savings. Because we tested our short-term pilot within the confines of existing technology, when we forecast the workload reduction across the next ten years of reviews, we anticipate almost $8 million in savings on labor costs. Those savings can be applied to piloting AI solutions for other governmental programs managing increased volumes of data and records with finite resources, such as information access requests for electronic records and Freedom of Information Act requests.

Rarely in government do we prioritize the time to try, and potentially fail, in the interest of innovation and efficiency. The small-scale declassification pilot allowed for a proof of concept before committing to sweeping changes. In our next phase, the Department is bringing the pilot to scale so that the AI technology is integrated with existing Department technology as part of the routine declassification process.

Federal interest in AI use cases has exploded in only the last few months, with many big and bold ideas being debated. While positive, these debates should not detract from use cases like this, which can rapidly improve government efficiency and transparency through the release of information to the public. Furthermore, the lessons learned from this use case (having clear metrics of success up front, investing in data quality and structure, and starting with a small-scale pilot) can also be applied to future generative AI use cases. AI's general-purpose capabilities mean that it will eventually be a part of almost all aspects of how the government operates, from budget and HR to strategy and policy making. We have an opportunity to help shape how the government modernizes its programs and services within and across federal agencies to improve services for the public in ways previously unimagined or impossible.

Matthew Graviss is chief data and AI officer at the Department of State, and director of the agency's Center for Analytics. Eric Stein is the deputy assistant secretary for the Office of Global Information Services at State's Bureau of Administration. Samuel Stehle is a data scientist within the Center for Analytics.

Originally posted here:

The Department of State's pilot project approach to AI adoption - FedScoop

Anthropic and SK Telecom team up to build AI model for telcos – Tech Monitor

AI lab Anthropic is working with South Korea's largest telecom company, SK Telecom, on a new global telecommunications-focused large language model (LLM). The partnership also saw SKT invest $100m in the US-based developer.

Anthropic is one of a handful of AI labs focused on building foundation models and chatbots and on working towards next-generation artificial general intelligence (AGI). Founded by former OpenAI executives, it released a new version of its Claude AI model last month.

Google is one of the biggest investors in Anthropic, extending its investment in May by joining a $450m funding round in the start-up. This round also included investment from Salesforce Ventures, Zoom Ventures and Spark Capital, bringing Anthropic's value to $4.1bn.

The new funding from SK Telecom builds on its own investment as part of that earlier Series C funding round, but the company has not confirmed the value of its total stake in Anthropic.

The agreement will see the AI lab fine-tune Claude, with support from SK, to ensure it meets the needs of global telecoms companies. This will include adding specific skills for customer service, marketing, sales and consumer applications targeted at telecom industry use cases.

Jared Kaplan, co-founder and chief science officer at Anthropic, will lead the projects, including setting the product roadmap and direction of customisation. The model will then be available on a new Telco AI Platform in development by the Global Telco AI Alliance, a partnership between operators including SKT, Deutsche Telekom, e& and Singtel that was announced earlier this year.

"SKT has incredible ambitions to use AI to transform the telco industry," said Dario Amodei, co-founder and CEO of Anthropic. "We're excited to combine our AI expertise with SKT's industry knowledge to build an LLM that is customised for telcos."

The Global Telco AI Alliance confirmed the new AI platform last month, which will serve as the core foundation for new AI services. This includes the creation of digital assistants, improvements to existing telecom services and new super apps offering a range of AI-powered services.

Each of the companies involved has appointed a C-suite-level representative to coordinate the overall collaboration. This will include working out ways to utilise generative AI within the sector and working across an open-vendor approach to technology.

Ryu Young-sang, CEO of SK Telecom, said the two companies would work to promote global AI innovation. "By combining our Korean language-based LLM with Anthropic's strong AI capabilities, we expect to create synergy and gain leadership in the AI ecosystem together with our global telco partners."

Outside of this alliance, companies like Vodafone and BT are also actively exploring ways to utilise generative AI. Vodafone is using it with IoT devices to monitor and reduce network emissions in the UK, drawing on its own in-house big data analytics platform alongside 11,500 UK radio base stations to look for consumption anomalies.

More here:

Anthropic and SK Telecom team up to build AI model for telcos - Tech Monitor

Derry City & Strabane – Explore the Future of Education and … – Derry City and Strabane District Council

15 August 2023

GenAIEdu 2023, the National Conference on Generative Artificial Intelligence in Education, will take place at the Derry~Londonderry campus of Ulster University from 11-13 September 2023.

The conference, hosted by the School of Computing, Engineering and Intelligent Systems, will explore the cutting-edge world of Generative Artificial Intelligence in an educational context.

Whether you are an educator, researcher, teacher, student or industry professional, this conference is your gateway to understanding how generative AI is revolutionising the way we learn, teach and assess.

Through a series of keynotes, talks, discussion panels, hands-on workshops, demonstrations and networking events with leading academics, researchers and industry experts in this area, you will learn about cutting-edge technologies and large language models such as ChatGPT, Bard and Claude, and how these tools and their natural language processing capabilities enable personalized and interactive learning experiences.

Michael Callaghan, Conference chair and Reader in the School, said:

"Generative AI offers unprecedented opportunities and challenges for transformation in education, which we must navigate carefully. The GenAIEdu conference will explore the impact of Generative AI on students, and the evolving role of educators and institutions in this technologically enriched and rapidly evolving landscape."

Professor Jim Harkin, Head of the School of Computing, Engineering, and Intelligent Systems, added:

"The School is delighted to host this conference and explore the use of Generative AI both in education and society in general. We are uniquely positioned on the Derry~Londonderry campus to be at the forefront of shaping the next generation of computing and AI graduates with our offering of undergraduate degree courses in Computer Science and Artificial Intelligence."

Conference registration

More details on registration, conference hotel rates, practicalities of travel and the content of the talks are available on the conference website:

https://www.ulster.ac.uk/conference/genaiedu-2023

Register now as places are limited.

https://store.ulster.ac.uk/product-catalogue/faculty-of-computing-engineering/school-of-computing-and-intelligent-systems/genaiedu-2023-conference-registration

All registration queries should be sent to [emailprotected] with the email subject heading "GenAIEdu Registration Query".

Follow us on Twitter/X

https://twitter.com/GenAiEdu/

Confirmed speakers and presenters include

Sue Attewell - Co-lead of the National Centre for AI at JISC

https://www.linkedin.com/in/sueattewell/

Dr Cris Bloomfield - Education Architect Microsoft

https://www.linkedin.com/in/crispinbloomfield/

Michael Callaghan - Reader, Ulster University

https://www.linkedin.com/in/michael-callaghan-48977316/

Manjinder Kainth - CEO & Co-founder of Graide

https://www.linkedin.com/in/manjinderkainth/

Peter Kilcoyne - TeacherMatic, Director at Transform Education

https://www.linkedin.com/in/peter-kilcoyne-87aa7541/

Martin Neale - Founder and CEO of ICS, the UK's first Microsoft AI Inner Circle Partner

https://www.linkedin.com/in/martinneale/

JJ Quinlan - Lecturer and Researcher for Creative Media DkIT

https://www.linkedin.com/in/jj-quinlan-a8695a1b/

Emil Reisser-Weston - Open eLMS Edtech Futurist and Managing Director

https://www.linkedin.com/in/emilrw/

Professor Mairéad Pratschke - Professor and Chair in Digital Education, University of Manchester

https://www.linkedin.com/in/maireadpratschke/

Dr Muskaan Singh - Lecturer at Ulster University

https://www.linkedin.com/in/muskaan-singh-73b316197/

View post:

Derry City & Strabane - Explore the Future of Education and ... - Derry City and Strabane District Council

Ethical Considerations of Using AI for Academic Purposes – Unite.AI

AI-driven services are revolutionizing numerous sectors, and academia is no exception. But as with any groundbreaking technology, there are ethical considerations to ponder. Why is this discussion vital? Because our approach to education shapes future generations.

At its core, an AI-driven essay service leverages artificial intelligence to craft, enhance, or check essays. These services can offer a range of features, including but not limited to:

Some advanced AI tools can generate entire essays based on given prompts or topics.

AI-driven services can detect and correct grammatical errors, punctuation mistakes, and awkward phrasings in an essay, often more quickly and accurately than traditional spell-checkers. Some AI tools can evaluate the style and tone of an essay, providing feedback on whether the content is formal, informal, positive, negative, or neutral. These services can also suggest improvements in terms of vocabulary, sentence structure, and coherence.
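As one illustration of the tone-evaluation feature described above, here is a minimal sketch using an off-the-shelf sentiment model from the Hugging Face transformers library. The essay excerpt and model choice are illustrative assumptions; commercial essay services do not publish their pipelines, and a full service would also layer on grammar and style checks.

```python
# Minimal sketch of automated tone feedback, assuming the Hugging Face
# "transformers" library; real essay services do not disclose their models.
from transformers import pipeline

tone_classifier = pipeline("sentiment-analysis")  # downloads a default English model

essay_excerpt = (
    "The results of the experiment were disappointing, and the methodology "
    "raises serious questions about the study's conclusions."
)

result = tone_classifier(essay_excerpt)[0]
print(f"Tone: {result['label']} (confidence {result['score']:.2f})")
```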

By comparing the content of an essay with vast databases of existing content, these services can identify potential instances of plagiarism.
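The comparison step described here is often implemented as a document-similarity search. A minimal sketch using TF-IDF vectors and cosine similarity follows; the tiny reference corpus and the flagging threshold are illustrative assumptions, and production plagiarism checkers use far larger databases and more sophisticated matching.

```python
# Minimal sketch of similarity-based plagiarism screening; the corpus and
# threshold below are illustrative, not how any particular service works.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference_corpus = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The French Revolution began in 1789 and reshaped European politics.",
]
submitted_essay = (
    "Photosynthesis is the process that converts light energy "
    "into chemical energy in plants."
)

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(reference_corpus + [submitted_essay])

# Compare the submitted essay (last row) against every reference document.
similarities = cosine_similarity(vectors[-1], vectors[:-1])[0]
for source, score in zip(reference_corpus, similarities):
    if score > 0.5:  # illustrative threshold
        print(f"Possible overlap (similarity {score:.2f}): {source}")
```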

Some AI tools can help students gather relevant information or data related to their essay topic, streamlining the research process.

With the rapid incorporation of AI in the educational sector, AI essay writer services have become increasingly available.

In today's fast-paced academic environment, every moment counts. Students juggle multiple assignments, extracurricular activities, and personal commitments. AI steps in as a powerful ally, streamlining tasks and cutting down the time spent on repetitive or cumbersome processes. By handling tasks like research, grammar checks, and basic content suggestions, AI tools allow students to manage their time more effectively, focusing on deeper understanding and creativity.

One of the significant advantages of integrating AI in the academic realm is the enhancement of the learning experience. By pinpointing specific weaknesses in students' work, AI-driven tools provide a clear roadmap for improvement. Students can zero in on those areas that truly require attention, ensuring that their efforts are channeled effectively.

The traditional academic feedback loop, which often involves long waits and general comments, is undergoing a revolution thanks to AI. No longer do students have to wait weeks to understand where they went wrong. With instant critiques available at their fingertips, learning becomes a dynamic and swift process. This immediacy not only boosts student engagement but also facilitates rapid iteration and understanding.

When power is amplified by AI's capabilities, the ethical dimensions surrounding its use become even more critical. Here is the crux of the matter: with great power comes great responsibility.

One of the most pressing concerns is the authenticity of the work produced by AI essay writers. If a student submits an essay primarily generated by an AI tool, can we truly say it is the student's original work? This blurring of boundaries between human effort and machine output challenges our traditional understanding of authorship and originality. It raises the question: Are we inadvertently promoting a culture where the process of thinking, analyzing, and creating is outsourced to machines?

Also, the age of AI presents a nuanced form of the age-old problem of plagiarism. Even if AI tools can generate unique content, the shadow of doubt regarding its originality persists. It is not just about lifting content from existing sources; it is about the genesis of the idea itself. And even if technically non-plagiarized, does it uphold the spirit of academic integrity?

While AI has shown remarkable proficiency in various tasks, its reliability remains a topic of debate. Machines operate based on algorithms and data, which might not always capture the nuances and complexities of human thought. Relying solely on AI's judgment could lead to misconceptions and inaccuracies.

Today, data has become the new gold, and so there are plenty of data privacy concerns. As students increasingly turn to online AI tools for academic assistance, they often share personal information, essays, and research. But at what cost? There are growing concerns about how this data is stored, who has access to it, and its potential misuse. Are students inadvertently compromising their privacy in exchange for the convenience AI-driven services offer?

In the realm of academia, the essence of learning is not just about obtaining information but about the originality of thought and the ability to innovate. There is no denying that AI possesses the capability to generate vast amounts of content, often mimicking human-like patterns of writing. However, while it can replicate, it does not necessarily innovate in the way humans do. The human mind draws from experiences, emotions, culture, and a myriad of other factors that AI simply does not possess for now. The nuance, the serendipity, and the sheer unpredictability of human creativity are challenging, if not impossible, for AI to emulate completely. Can a machine truly capture the essence of a eureka moment or the thrill of an unexpected connection?

The primary goal of education, fostered over the years by human essay writers, is not just knowledge accumulation but holistic personal and intellectual development. There is a risk of students becoming passive recipients rather than active learners. By leaning heavily on AI, they might miss out on challenges, mistakes, and subsequent learnings that are instrumental in growth. In bypassing the struggles, are we also bypassing the most significant opportunities for intellectual and personal growth?

Education's cornerstone is the development of critical thinking and analytical skills. However, an over-reliance on AI poses the risk of students outsourcing this crucial aspect of their education. When a machine is tasked with generating content, structuring arguments, or even conducting research, students may find themselves sidestepping the very processes that hone their cognitive abilities. In the long run, is it going to do more harm than good by depriving students of opportunities to think deeply and critically?

The quest for knowledge is as much about the journey as it is about the destination. But when tools like AI offer shortcuts, there is a temptation to skip the learning journey altogether. The adage "Easy come, easy go" perfectly encapsulates this problem; what is achieved without effort might be lost just as quickly.

The advent of technology in classrooms has undeniably reshaped educational relationships. When it is not just a tool but an AI-driven entity intervening, the roles can change profoundly. The role of educators is undergoing a transformation. Instead of being the primary source of information, educators might find themselves transitioning to the role of mentors. Their primary function may shift from direct teaching to guiding, facilitating, and fostering an environment where students can critically engage with AI-generated content.

As to feedback, it is not just about pointing out mistakes. It is about fostering growth with a human touch. When feedback stems from AI, it might be precise and instant, but it often lacks the nuances and empathy a human educator provides. This potential absence of personal connection can impact the depth and quality of a student's personal and academic growth.

The rapid integration of AI into the educational sector might present challenges, but it is important to remember that every challenge is an opportunity in disguise. By approaching AI's incorporation with forethought and responsibility, we can ensure it becomes a boon rather than a bane.

One of the most immediate steps educational institutions can take is the establishment of clear policies and guidelines regarding AI usage. By setting boundaries on how and when AI tools should be employed, institutions can ensure that the technology is used to complement human educators rather than replace them. This can also safeguard academic integrity and ensure that the essence of learning is not compromised.

In addition, by investing in comprehensive training programs for both educators and students, institutions can reduce the potential for misuse and misunderstanding. Educators can be trained on how to best integrate AI tools into their teaching methodologies, and students can be educated on the ethical considerations and best practices for using AI in their learning processes. Through proper education, we can strike the right balance, harnessing the immense potential of AI while preserving the invaluable human touch in the realm of education.

The intersection of AI and academia is fraught with both promise and pitfalls. While the allure of AI-driven essay writing is undeniable, it is vital to navigate this terrain with a moral compass. The future of education hinges not just on technology but how we choose to wield it.

See original here:

Ethical Considerations of Using AI for Academic Purposes - Unite.AI