BY JASON MATTHEWS
When ChatGPT debuted in 2022, workforce agencies immediately recognized opportunities to enhance job matching, accelerate communications, and streamline services. While initial adoption raised important policy questions about security and accuracy, states continue to move forward proactively to harness AI’s benefits.
In 2023, Maine briefly paused ChatGPT use within state agencies to evaluate cybersecurity and misinformation risks. Michigan and New York also introduced measures emphasizing privacy protection and human oversight. At the federal level, agencies such as the Department of Veterans Affairs adopted similar precautionary guidelines to ensure responsible AI use.
Today, professionals nationwide want clearer guidelines before fully embracing AI. Recent NVTI surveys show strong consensus among workforce specialists on the need for structured guidance and training to maximize AI's positive impacts.
In Part One of this series, we explored how AI tools such as ChatGPT, Grammarly, and Microsoft Copilot can significantly improve client interactions, administrative efficiency, and data-driven decision-making. While adoption varies by region due to evolving state regulations and practitioner familiarity, the overall trajectory points toward increasingly thoughtful integration. In this article, we examine those regional differences in detail: we highlight practitioner experiences, explain why some states are proceeding cautiously, and identify opportunities to align policy with ethical and practical use. Our aim is to help workforce professionals integrate AI tools confidently, enhancing service quality without sacrificing security or the personalized support that veterans and job seekers rely on.
Regional Trends in AI Adoption
Across the United States, workforce agencies are increasingly exploring how to integrate AI tools effectively, supported by ongoing policy developments at the state and federal levels. Recent NVTI surveys indicate that most workforce professionals are able to use AI tools in their roles and are enthusiastic about the technology's potential. However, many respondents want more clearly defined guidelines to support confident and responsible use.
States are addressing AI adoption in different ways. In 2023, Maine implemented a temporary pause on generative AI tools to evaluate cybersecurity and misinformation concerns (1). In 2024, Michigan introduced measures to protect sensitive legislative information on official devices (2). New York established frameworks requiring human oversight for critical AI-driven public services, such as unemployment assistance (3).
Elsewhere, states are developing explicit AI policies to reduce uncertainty. In the meantime, professionals are finding their own ways to access ChatGPT and similar tools, a sign of frontline determination to capture AI's benefits.
At the federal level, agencies including the Department of Energy and the Department of Veterans Affairs have taken thoughtful steps to ensure secure, reliable use of AI, reinforcing a balanced approach between innovation and data protection (4).
Voices from the Field: What Practitioners Are Saying
Frontline workforce professionals are actively shaping AI integration, finding practical ways to use new technologies even as policies evolve. Recent NVTI surveys reflect their enthusiasm and resourcefulness in applying AI tools at work.
While some professionals have questions about security and accuracy, these concerns underline a widely shared desire for clear guidelines and targeted training. A Nebraska respondent emphasized the importance of protecting personal information, reflecting broader priorities shared across the workforce.
Practitioners often find creative workarounds to current limitations. One Indiana specialist described getting around device restrictions by accessing AI tools on a personal phone, illustrating how determined workforce professionals are to put useful technologies to work.
Interest in formal AI training remains exceptionally high. A respondent from Michigan summarized this optimism: “It’s unavoidable. We either embrace change or get left behind.” (NVTI Survey Results 2025).
Overall, workforce professionals are optimistic about AI's ability to enhance their services, and they see clear policies, supportive training, and consistent guidelines as the keys to realizing that potential with confidence.
Why Some Regions Are Holding Back
- Security & Privacy:
  - Maine: Temporarily paused generative AI tools over cybersecurity and misinformation risks (1).
  - Michigan: Limited AI use on legislative devices for data security (2).
  - Federal: Agencies restricted AI tools, citing sensitive-data concerns (4).
  - Workforce staff worry about protecting clients' personal information.
- Unclear Policies:
  - New York: Required assessments and human oversight of AI use (3).
  - NVTI surveys consistently report confusion around AI policies.
- Ethical Concerns:
  - New York explicitly protects human roles in critical public services (3).
  - Staff fear AI may lack necessary human nuance (NVTI Survey Results 2025).
- Budget Constraints:
  - Arkansas survey: "Right now we use Teams" due to budget limits (NVTI Survey Results 2025).
  - Many regions cite high costs as a barrier to adopting advanced AI tools (NVTI Survey Results 2025).
Charting a Clear Path Forward: Aligning Policy, Practice, and Ethical AI Integration
Workforce agencies stand at a critical point, balancing AI’s potential with policy complexities. Insights from states like Maine, Michigan, and New York, combined with frontline feedback, highlight clear ways to align policies with practical applications.
Clear and Consistent Guidelines – Workforce agencies must partner closely with state policymakers to create explicit AI usage guidelines. NVTI surveys highlight widespread confusion; clearer policies will help professionals leverage AI confidently and securely. Regions successfully using AI should share best practices through pilot projects, serving as models for responsible innovation. NVTI can promote cross-regional exchanges and case studies demonstrating effective AI implementation.
Targeted Training Initiatives and NVTI’s Ongoing Role – Frontline professionals express a strong interest in formal AI training. Although NVTI doesn’t yet offer AI-specific courses, it will continue to address this demand by focusing on ethical use, data privacy compliance, and practical strategies for integrating AI responsibly into daily workflows. NVTI is exploring ways to support AI adoption by gathering data, facilitating regional learning, and highlighting successful case studies. These efforts align closely with NVTI’s mission to equip veteran service providers effectively.
A Unified Vision for the Future
Effective AI integration depends on aligning policy, practice, and ethical considerations. Workforce agencies must maintain transparency and prioritize human-centered values. NVTI remains dedicated to ensuring technology enhances, rather than detracts from, the critical work of supporting veterans and job seekers. 
Sources
(1) Maine Office of Information Technology: Cybersecurity Directive on AI Tools (June 2023)
(2) GovTech: Michigan Senate Limits AI Access (November 2024)
(3) Times Union: New York State AI Oversight Legislation (December 2024)
(4) FedScoop: Federal Agencies Restrict AI Tools (April 2025)
(5) Lexology: Alabama and Oklahoma Ban Foreign-Linked AI (April 2025)
(6) NVTI: National Veterans' Training Institute Course Descriptions