The Department of State’s pilot project approach to AI adoption – FedScoop
With the release of ChatGPT and other large language models, generative AI has clearly caught the public's attention. This new awareness, particularly in the public sector, of the tremendous power of artificial intelligence is a net good. However, excessive focus on chatbot-style AI capabilities risks overshadowing applications that are both innovative and practical and that seek to serve the public through increased government transparency.
Within government, there are existing projects that are more mature than AI chatbots and are immediately ready to deliver more efficient government operations. Through a partnership between three offices, the Department of State is seeking to automate the cumbersome process of document declassification and prepare for the large volume of electronic records that will need to be reviewed in the next several years. The Bureau of Administration's Office of Global Information Services (A/GIS), the Office of Management Strategy and Solutions' Center for Analytics (M/SS CfA), and the Bureau of Information Resource Management's (IRM) Messaging Systems Office have piloted and are now moving toward production-scale deployment of AI to augment an intensive, manual review process that normally necessitates a page-by-page human review of 25-year-old classified electronic records. The pilot focused mainly on cable messages, which are communications between Washington and the department's overseas posts.
The 25-year declassification review process entails a manual review of electronic, classified records at the confidential and secret levels in the year that their protection period elapses; in many cases, 25 years after original classification. Manual review has historically been the only way to determine whether information can be declassified for eventual public release or must be exempted from declassification to protect information critical to our nation's security.
However, manual review is a time-intensive process. A team of about six reviewers works year-round to review classified cables and must use a triage method to prioritize the cables most likely to require exemption from automatic declassification. In most years, they are unable to review every one of the 112,000 to 133,000 electronic cables under review from 1995-1997. The risk of not being able to review each document for sensitive material is exacerbated by the increasing volume of documents.
This manual review strategy is quickly becoming unsustainable. Around 100,000 classified cables were created each year between 1995 and 2003. The number of cables created in 2006 that will require review grew to over 650,000 and remains at that volume for the following years. While emails are currently an insignificant portion of 25-year declassification reviews, the number of classified emails doubles every two years after 2001, rising to over 12 million emails in 2018. To get ahead of this challenge, we have turned to artificial intelligence.
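As a quick plausibility check on these figures: doubling every two years from 2001 to 2018 is 8.5 doublings, a growth factor of roughly 360. The sketch below back-solves the implied 2001 baseline from the article's 12-million figure for 2018; that baseline is an inference for illustration, not a published number.

```python
# Doubling every two years implies N(t) = N_2001 * 2**((t - 2001) / 2).
# The 2001 baseline is back-solved from the cited 12 million emails in 2018;
# it is an assumption, not a figure reported by the Department.
N_2018 = 12_000_000
doublings = (2018 - 2001) / 2           # 8.5 doublings
growth_factor = 2**doublings            # roughly 360x growth over the period
N_2001 = N_2018 / growth_factor         # roughly 33,000 classified emails

print(f"Implied 2001 baseline: about {N_2001:,.0f} emails")
print(f"Growth factor, 2001-2018: about {growth_factor:,.0f}x")
```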
Because AI is still a cutting-edge innovation that carries uncertainty and risk, our approach started with a pilot to test the process on a small scale. We trained a model, using human declassification decisions made in 2020 and 2021 on cables classified confidential and secret in 1995 and 1996, to recreate those decisions on cables classified in 1997. Over 300,000 classified cables were used for training and testing during the pilot. It took three months and five dedicated data scientists to develop and train a model that matches previous human declassification review decisions at a rate of over 97 percent, with the potential to reduce the existing manual workload by more than 65 percent. The pilot approach allowed us to consider and plan for three AI risks: lack of human oversight of automated decision-making, the ethics of AI, and overinvestment of time and money in products that aren't usable.
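The article does not disclose the model's architecture, so the following is only a minimal sketch of the pilot's setup as described: train on human decisions for 1995-96 cables, then measure agreement with human decisions on held-out 1997 cables. The file name, column names, and the TF-IDF-plus-logistic-regression classifier are all illustrative assumptions.

```python
# Minimal sketch of the pilot's train/test split, under assumed data layout.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

cables = pd.read_csv("declass_reviews.csv")  # hypothetical export of past human reviews

train = cables[cables.year_classified.isin([1995, 1996])]  # decisions made in 2020-21
test = cables[cables.year_classified == 1997]              # held-out review year

vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train.cable_text)
X_test = vectorizer.transform(test.cable_text)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, train.decision)  # decision labels: "declassify" or "exempt"

agreement = accuracy_score(test.decision, model.predict(X_test))
print(f"Agreement with human reviewers: {agreement:.1%}")  # the pilot reported over 97%
```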
The new declassification tool will not replace jobs. The AI-assisted declassification review process requires human reviewers to remain part of the decision-making process. During the pilot and the subsequent weeks of work to put the model into production, reviewers were consistently consulted and their feedback integrated into the automated decision process. This combination of technological review with human review and insight is critical to the success of the model. The model cannot make a decision with confidence on every cable, so human reviewers make a decision as they normally would on a portion of all cables. Reviewers also conduct quality control: a small, yet significant, percentage of cables with confident automated decisions is given to reviewers for confirmation. If enough of the AI-generated decisions are contradicted during the quality control check, the model can be retrained to consider the information it missed and integrate reviewer feedback. This feedback is critical to sustaining the model in the long term and to accounting for evolving geopolitical contexts. During the pilot, we determined that additional input from the Department's Office of the Historian (FSI/OH) could help strengthen future declassification review models by providing context about world events during the years of the records being reviewed.
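The routing logic described above can be made concrete with a short sketch: confident automated decisions are accepted, a small random sample of them is re-checked by humans, and low-confidence cables go to reviewers as before. The threshold and quality-control sampling rate below are illustrative assumptions; the article does not publish either value.

```python
# Sketch of human-in-the-loop routing, with assumed threshold and QC rate.
import numpy as np

CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff for a "confident" automated decision
QC_SAMPLE_RATE = 0.05        # assumed share of confident decisions re-checked by humans

rng = np.random.default_rng(seed=0)

def route_cable(prob_exempt: float) -> str:
    """Route one cable based on the model's probability that it must stay exempt."""
    confident = prob_exempt >= CONFIDENCE_THRESHOLD or prob_exempt <= 1 - CONFIDENCE_THRESHOLD
    if not confident:
        return "manual_review"      # reviewers decide as they normally would
    if rng.random() < QC_SAMPLE_RATE:
        return "automated_with_qc"  # confident decision, confirmed by a reviewer
    return "automated"              # confident decision accepted as-is

# If QC reviewers contradict enough automated decisions, the corrected labels
# feed a retraining pass -- the feedback loop the authors describe.
print(route_cable(0.99), route_cable(0.60), route_cable(0.02))
```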
There are ethical concerns that innovating with AI will lead to governing by algorithm. Although the descriptive AI used in our pilot does not construct narrative conversations like large language models (LLMs) such as ChatGPT, it is designed to make decisions by learning from previous human inputs. This approximation of human thought raises concerns about ethical government when it replaces what is considered sensitive and specialized experience. In our implementation, AI is a tool that works in concert with humans for validation, oversight, and process refinement. Incorporating AI tools into our workflows requires continually addressing the ethical dimensions of automated decision-making.
This project also saves money, potentially millions of dollars' worth of personnel hours. Innovation for the sake of being innovative can result in overinvestment in dedicated staff and technology that cannot sustain itself or deliver long-term cost savings. Because we tested our short-term pilot within the confines of existing technology, when we forecast the workload reduction across the next ten years of reviews, we anticipate almost $8 million in savings on labor costs. Those savings can be applied to piloting AI solutions for other governmental programs managing increased volumes of data and records with finite resources, such as information access requests for electronic records and Freedom of Information Act requests.
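A back-of-the-envelope reconstruction of this estimate is sketched below. Only the team size of about six reviewers, the 65 percent workload reduction, and the ten-year horizon come from the article; the fully loaded labor cost per reviewer-year is an illustrative assumption chosen for the sketch.

```python
# Rough reconstruction of the ten-year savings figure; the per-reviewer
# labor cost is an assumption, not a published Department number.
reviewers = 6
cost_per_reviewer_year = 200_000   # assumed fully loaded labor cost, USD
workload_reduction = 0.65          # share of manual review the model can absorb
years = 10

savings = reviewers * cost_per_reviewer_year * workload_reduction * years
print(f"Estimated ten-year labor savings: ${savings:,.0f}")  # ~$7.8M, near the cited ~$8M
```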
Rarely in government do we prioritize the time to try, and potentially fail, in the interest of innovation and efficiency. The small-scale declassification pilot allowed for a proof of concept before committing to sweeping changes. In our next phase, the Department is bringing the pilot to scale so that the AI technology is integrated with existing Department technology as part of the routine declassification process.
Federal interest in AI use cases has exploded in only the last few months, with many big and bold ideas being debated. While positive, these debates should not detract from use cases like this one, which can rapidly improve government efficiency and transparency through the release of information to the public. Furthermore, the lessons learned from this use case (having clear metrics of success upfront, investing in data quality and structure, and starting with a small-scale pilot) can also be applied to future generative AI use cases. AI's general-purpose capabilities mean that it will eventually be a part of almost all aspects of how the government operates, from budget and HR to strategy and policymaking. We have an opportunity to help shape how the government modernizes its programs and services within and across federal agencies to improve services for the public in ways previously unimagined or impossible.
Matthew Graviss is chief data and AI officer at the Department of State, and director of the agency's Center for Analytics. Eric Stein is the deputy assistant secretary for the Office of Global Information Services at State's Bureau of Administration. Samuel Stehle is a data scientist within the Center for Analytics.