CHAPTER 9
AI for Agencies

ON A COLD FEBRUARY morning in 2021, one of the co-authors of this book received an urgent message from an executive at a government contractor. She, along with a group of several firms, was trying to respond to an AI-related RFP, and she wanted Professor Naqvi to review the proposal before submission. When Professor Naqvi reviewed the RFP, he observed that both the questions and the answers related to software quality were not applicable to machine learning (ML) systems. The RFP, he recognized, must have been developed by staff who were not trained in AI. He explained to the executive that while the questions seemed aimed at the post-development, deployment, and production-integration stage of ML, testing for machine learning application development is fundamentally different from testing non-learning software. Since machine learning software develops from data, its development process is different: it requires training the learning algorithm. Among others, some of the issues in testing include the following (a few are illustrated in the sketch after this list):

  • Understanding the features of the data and performing initial testing for the performance potential and informativeness of features (for example, calculating entropy), as well as studying the mathematical characteristics of the data;
  • Dividing data into development/training, cross-validation, and test sets;
  • Selecting various algorithms for testing and studying their performance;
  • Understanding the population dynamics from which the data is sampled and its evolutionary features;
  • Testing for human bias and data issues;
  • Testing for overfitting or underfitting, and understanding cross-validation, variance, and bias;
  • Applying back-testing protocols where applicable;
  • Testing model dynamics and dimensionality;
  • Testing for stability in relation to population distribution changes;
  • Testing across the life cycle, and testing for ethics and governance.
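
Several of these items can be made concrete in a few lines of code. The following is a minimal sketch, assuming synthetic data and the scikit-learn and SciPy libraries (our illustrative choices, not anything prescribed by an actual RFP). It scores feature informativeness with entropy and mutual information, splits the data, surfaces overfitting through cross-validation, and checks distribution stability with a two-sample Kolmogorov–Smirnov test.

```python
# Minimal sketch of ML-specific testing steps on synthetic data.
# Library choices (scikit-learn, SciPy) are illustrative assumptions.
import numpy as np
from scipy.stats import entropy, ks_2samp
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a real problem domain's data.
X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=4, random_state=0)

# 1. Informativeness of features: mutual information with the label,
#    plus entropy of each (discretized) feature's distribution.
mi = mutual_info_classif(X, y, random_state=0)
for i, score in enumerate(mi):
    counts, _ = np.histogram(X[:, i], bins=20)
    h = entropy(counts / counts.sum())
    print(f"feature {i}: mutual_info={score:.3f}, entropy={h:.3f}")

# 2. Dividing data into development/training and held-out test sets.
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 3. Overfitting check: a model that is near-perfect on training data
#    but much weaker under cross-validation has high variance.
model = DecisionTreeClassifier(random_state=0)  # deliberately overfit-prone
model.fit(X_dev, y_dev)
train_acc = model.score(X_dev, y_dev)
cv_acc = cross_val_score(model, X_dev, y_dev, cv=5).mean()
print(f"train accuracy={train_acc:.3f}, cross-val accuracy={cv_acc:.3f}")

# 4. Stability against population distribution changes: compare a
#    (simulated) deployment-time feature against the training data.
rng = np.random.default_rng(0)
X_drifted = X_test + rng.normal(0.5, 0.1, X_test.shape)  # simulated shift
stat, p_value = ks_2samp(X_dev[:, 0], X_drifted[:, 0])
print(f"KS test on feature 0: p={p_value:.4f} "
      f"({'shift suspected' if p_value < 0.01 else 'stable'})")
```

None of this resembles the quality checklist of a conventional software RFP, which is exactly the gap Professor Naqvi was pointing out.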

Once the model is developed, production and deployment testing are performed using various data, and the AI system is integrated into the existing technology infrastructure. At this stage, some of the testing uses traditional approaches—but it is the last stage of testing and is relatively less complex and more deterministic than the testing explained above. Unfortunately, this was not the only example where existing templates for non-AI solutions were being used to source AI solutions.

Since 2016, the government has sourced several AI projects. The sourcing process starts when the government issues a request for proposal (RFP) for technology projects. Much of the sourcing is undertaken by the General Services Administration (GSA). However, even though the process has been going on for years now, we noticed something peculiar about the RFPs related to AI. They were structured as if they were for legacy (non-AI) projects. In many cases the questions were posed as if they were for non-AI solutions. This implied that the buyer did not have a good understanding of what was needed to develop and deploy AI systems.

But that was symptomatic of something far more problematic. It showed that much of the AI adoption could be happening only because of the directives issued by the president or the OSTP assistant director. From an organizational and change management perspective, it meant people were rushing to embrace the technology just for the sake of doing it and to comply with the directive. They were not actually thinking, creating, designing, and developing new process maps. AI is not just about automating manual processes. More than anything else, it is about rethinking how work is accomplished and finding new types of work. In other words, you are not just automating the existing process, you are also innovating the work itself. This also includes thinking and imagining how everything else that surrounds a certain type of work will change. An example makes this clear. The autonomous car does not merely automate driving; it also increases the efficiency and safety of driving. The autonomous machine does not drink alcohol, require sleep, or get distracted by phone calls. At an advanced level, the car will understand the personality and emotions of its drivers. But having autonomous cars implies that builders have to rethink the capacity of parking lots—since cabs could be out on the road constantly, making car ownership less desirable. It will also have an impact on organ donations, since there will be fewer accidental deaths—the primary source of donated organs. It will require rethinking energy strategy and developing city infrastructure conducive to autonomous driving. And so on.

THE AGENCY PROBLEM

It was mid-2017 and the nation had somewhat recovered from the 2016 election shock. As focus returned to building the American future, many people began thinking about AI. AI was the new thing, and its popularity was growing by the minute.

Justin Herman—also known as Justin (Doc) Herman—of GSA managed a LISTSERV (email mailing list) known as Public AI. In October of 2016, right before the elections, GSA created three new information-sharing initiatives, one of which was about AI. The idea behind launching the digital communities was to give federal technologists and managers a platform for sharing best practices and ideas. A month later, America's attention was consumed by the elections and their aftermath. In emails sent to the mailing list, Justin clarified the goals for the government.

He argued that customer service initiatives could be greatly improved by incorporating next-generation digital public services powered by government data and new advances in artificial intelligence. Public services could become more open, responsive, informative, and accessible. He foresaw that natural voice recognition systems would revolutionize citizen engagement via personalization and automated delivery. The government began organizing workshops, and demand from agencies to learn about and experiment with AI exploded. Collaboration on best practices, security, policy, and privacy was encouraged.

As an early appreciator of AI, Herman recognized that some type of coordination across the agencies was necessary to drive adoption. He also realized that too many soft concepts were being spread about AI. Some people were talking about the future of work, others about consulting models, and yet others about governance-type models. He wanted to develop a no-nonsense, pragmatic perspective for the government. He had a powerful sense of humor and good people skills. Through that simple initiative of sharing information, he was able to bring the public and the agencies together on a single platform. One of the authors met Justin at a conference and found out that Justin had even bigger plans.

The government has many internal LISTSERVs where thousands of technologists share information. Justin wanted to not only bring internal government managers together but also connect them to an external ecosystem with the public, entrepreneurs, and other stakeholders. In August of 2017, Herman wrote in a blog entry:

Open data and emerging technologies—including artificial intelligence and distributed ledgers, such as blockchain—hold vast potential to transform public services held back by bureaucracy and outdated IT systems. We are opening the doors to bold, fresh ideas for government accountability, transparency and citizen participation by working with U.S. businesses, civil society groups and others to shape national goals for emerging technologies and open data in public services. (Herman 2017/Digital.Gov/Public Domain)

By September of 2017, the interaction had increased tremendously.

The initiative also led to the establishment of a database where use cases were listed by area. The use cases could be provided by anyone, whether from the public or an agency. The idea was to create a large collection of use cases via an “Emerging Citizen Technology Atlas,” which highlighted and published:

  • Federal use cases;
  • Ideas/concepts;
  • Programs;
  • Resources;
  • Events; and
  • Media.

A little more than a year after the initiative started, it had more than 2,000 members, and significant sharing of information was going on between managers across agencies. Herman continued to clarify that the focus of the initiative was on pragmatic aspects and urgency. In his communication he continued to push for practical projects, not some futuristic Johnny 5 from Short Circuit–type innovation. This focus on what's important now in public service, and the associated sense of urgency, drove the initial adoption.

This avenue opened up opportunities for agencies to collaborate with each other as well as with the private sector, where AI start-ups and emerging firms were encouraged to showcase their innovations and technologies. One such session organized by the group featured a keynote from Nvidia, which focused on the most practical, real-world uses of AI.

As innovative ideas develop, they tend to retain the initial conditions that led to their creation. In other words, at the inception of major transformations, social and cultural information is embedded and carried forward in the subsequent growth trajectories of innovations and ideas. As early as 2017, as agencies paid attention to AI, the concepts of interagency collaboration, crowdsourced ideas, and a pragmatic technology focus solidified. What was also embedded in the social construct was the need to do something quickly, to show results, to bring in technology innovation, to be practical and pragmatic, and to develop use cases that could be turned into products quickly. As two-way communication between the government and the public opened, there was no shortage of ideas. But this was also shaping the government's concept of what AI is. Economic historians have pointed out the complexity of embracing technology in times of rapid technological change. The interpretation of what constitutes a particular technology goes a long way toward defining the future of that technology and its adoption.

This government initiative, while commendable, also meant that the thinking was largely focused on doing something now and doing something practical. At that time the social perception or social construction of AI was either too vague and Hollywood-like, or too basic, such as robotic process automation (RPA). Machine learning was complicated for most people to understand and required building new skills, getting access to data sets, and developing a different vision of the future than what simple automation entailed. Without a comprehensive vision, the dominant design in the government at a social and cultural level settled on the common denominator: RPA. This was the automation everyone understood—digital bots automating simple, repeatable manual tasks. RPA quickly became the face of AI in the government.

One can appreciate the position Herman and his colleagues were in. There was a lot of talk about AI, but no concrete efforts were being made to achieve the transformation. There was a push from the top—both from the OSTP, which was now playing an active role in pushing the agencies, and from agency heads. No comprehensive strategic plans existed for agency transformation with AI. No meaning had been given to AI at a cross-agency level. The level of AI competency differed across agencies. Even the senior leaders did not understand how to empower their agencies with AI. It was a buzzword, and there was enough buzz around it, but no practical steps were being taken to move forward. It was at that time that people like Herman came together and began formulating a plan to push AI in their agencies. They were not concerned about grand plans or strategies. They did not care what the robots would look like five years from that time. They wanted to see AI in action. They wanted to see results. They wanted to be able to say that the US government was getting AI. Hence, the tactical adoption of AI in the government preceded any strategic thinking—that is, if strategic thinking ever entered the process.

In this case we don't blame Herman or his colleagues. They did what needed to get done. Just as a combat team that is not rescued and is left to die must improvise and adapt tactically to survive, government employees such as Herman were filling the vacuum created by the lack of leadership from the executives. This is not the ideal way to go about a change as powerful as AI, but it was the best they could do. Still, this mode of adoption carries several problems.

First, the pressure to do something, and to do it now, forces people to think about the simplest, low-hanging opportunities. Hence, they don't try to solve complex problems or even think about broad opportunities; they do what they need to do to make it look like things are moving forward.

Second, the urgency creates pressure to recruit or bring in a supplier quickly to show progress. This does not give the buyer enough time to properly determine their acquisition needs or the capabilities they should seek in a supplier.

Third, the term “practical” is often used to signify the opposite of strategic. The action orientation unleashed by pragmatism is viewed as an antidote to a passive, ivory-tower strategic orientation. While it is true that action is better than an analysis-paralysis state or perpetual passivity, it is also true that practical actions without strategic orientation and deeper thinking can lead to inefficient adoption of technology. You feel as if you are moving ahead, but in reality you are static or falling behind. The activity itself, and not the associated results, creates a deceptive measure of progress.

Fourth, the sharing of information across agencies, while helpful, means that the best outcome is limited by the collective knowledge frontier of the agencies. In other words, learning from each other is only as good as the most advanced player in the group. While it is true that learning can generate ideas and that new ideas can emerge from sharing information, the style and processing of information greatly constrain this type of creativity.

Fifth, the interaction of the people—both government employees and outsiders—creates the model that defines what is pragmatic and practical. People's backgrounds, vantage points, experience, and political considerations greatly affect what business models they form and how they form them. When groups consider these matters, they tend to look at the common denominator of constraints and opportunities. This implies that the models that emerge are the best political models, not the best functional models.

Sixth, at the early stage of a technological revolution, suppliers and entrepreneurs also do not possess broader strategic models. They function with limited information, often trying to solve a small problem. Their resources are limited, and in many cases their products and services are not fully developed. Through their marketing efforts, business development, existing relationships, and PR outreach, they can influence decision makers to turn the process in their favor. This can create unpredictable trajectories for technology growth.

Seventh, collecting use cases does not mean you have the best solution visions. AI creates new business models. Use cases tend to represent the current realities and are often based on automating the existing processes. But the AI revolution is about changing the business models.

Eighth, and most importantly, we cannot ignore the presence of an adversary on a global scale. Everything that gets done requires critical thinking to evaluate what the adversary will do.

Hence, what we call collaboration under these conditions tends to be greatly biased, and teams lack reflexivity to challenge their own assumptions.

The truly commendable efforts by Herman and his colleagues led to the creation of a large reserve of ideas and use cases.

Herman would not last in the government beyond January of 2019. After giving the government a decent start, he went to work in the private sector.

STRATEGIC PLANS

As thinking about AI matured beyond where it was in 2016, agencies began developing their AI strategic plans by 2018. Most of the plans were not really strategic plans; rather, they represented high-level will or aspirations. For example, the Department of Defense 2018 plan (DoD 2018) outlines the following five strategies:

DELIVERING AI-ENABLED CAPABILITIES THAT ADDRESS KEY MISSIONS

Under this aspiration DoD committed to pushing several initiatives in which AI would be implemented “rapidly, iteratively, and responsibly.” The high-level areas of impact identified were improving situational awareness and decision-making, increasing the safety of operating equipment, implementing predictive maintenance and supply, and streamlining business processes. This aspiration was based on automating tedious cognitive and physical tasks to free up talent for more strategic deployment.

SCALING AI'S IMPACT ACROSS DOD

DoD aspired to establish a common foundation over which AI could be scaled across the agency via decentralized development and experimentation.

In the plan, DoD views the innovative character of the American forces as a major capability and recognizes that the spirit of experimentation will lead to innovations. These innovations will be discovered by users, who can then scale and deploy them for their own use. Hence, the role of DoD should be to provide a platform for decentralized development and discovery. DoD used the term “democratize access to AI.” This, DoD claimed, will lead to scaling and adoption.

CULTIVATING A LEADING AI WORKFORCE

This included changing the culture of the organization as well as retraining the workforce to learn new skills. It also implied recruiting top AI talent.

ENGAGING WITH COMMERCIAL, ACADEMIC, AND INTERNATIONAL ALLIES AND PARTNERS

This implies establishing deep alliances and partnerships with the various parties who form the AI ecosystem, including the private sector, academia, and interagency cooperation.

LEADING IN MILITARY ETHICS AND AI SAFETY

This is about ethics, governance, and safety of AI. It includes focus areas such as explainable AI, testing, evaluation, certification, use styles, and validation.

The above aspirations, which DoD calls strategies, require some analysis. There could be unrelated or misleading assumptions embedded in them. The assumptions could be based on a non-AI-centric foundation, or on terms developed by marketing research or consulting firms that sound good but don't carry any deeper meaning.

For example, while the capability of providing forward-edge solutions with reusability and user access can be a viable option for non-ML technologies and RPA, an ML system developed to solve a problem in one area is most likely not usable in another. The reason is that the underlying data distributions and the data used to develop the first solution represent its own problem domain. When the problem domain changes, the solution needs to be developed again. The new solution implies not only using new problem-domain-specific data but may also employ different models and algorithms. This means that something developed in one area may not be usable in another area. Similarly, even when a solution is developed to solve a particular problem, a new solution must be developed when the underlying data of that problem changes. This could involve discovering additional data, finding more relevant or additional features, or identifying that the underlying distributions have changed. This means there is no automatic scalability or user-driven adaptation to new problem domains.
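
A small experiment illustrates the point. The sketch below is a hypothetical illustration, assuming scikit-learn and two synthetic “problem domains” generated by different processes; the accuracy collapse on the second domain shows why an ML solution built for one area rarely transfers to another without redevelopment.

```python
# Illustration: a model trained on one problem domain's data degrades
# on a domain whose underlying data distribution is different.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Domain A: the data the original solution was developed on.
X_a, y_a = make_classification(n_samples=2000, n_features=8,
                               n_informative=4, random_state=1)
# Domain B: same schema (same columns), but a different generating
# process -- a stand-in for another mission area's data.
X_b, y_b = make_classification(n_samples=2000, n_features=8,
                               n_informative=4, random_state=7)

model = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print(f"accuracy on domain A: {model.score(X_a, y_a):.2f}")  # high
print(f"accuracy on domain B: {model.score(X_b, y_b):.2f}")  # typically near chance
```

Reuse here is not a deployment question but a redevelopment question: new data, new validation, and possibly a different model family altogether.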

Similarly, if the term “democratization” implies decentralized experimentation, then it begs the question of how this will be integrated, coordinated, and brought back together to connect the parts into a whole, where the whole is greater than the parts. Enterprises can experiment perpetually, but if there is no way to connect the parts back to the whole, the parts will only create extra cost in the long run. The history of legacy IT shows that centralized solutioning tends to work better. For example, enterprise resource planning systems, shared services models, and customer relationship management models are all based on centralized planning, not decentralized sparks of innovation.

Then, the aspiration of hiring top talent is somewhat fictional. The best AI talent is picked up by Big Tech and top financial firms. To hire that talent, the government will have to compete with off-the-charts offers.

Then many other questions remain unanswered. For example, who will establish the allocation of resources and on what basis? How will priorities be determined?

Most importantly, who will determine that DoD is adhering to the strategy? How will the success be measured and determined? How will the five strategies translate into executable projects?

The above strategy of DoD led to the creation of JAIC, which was described as follows in the plan:

We established a Joint Artificial Intelligence Center (JAIC) to accelerate the delivery of AI-enabled capabilities, scale the Department-wide impact of AI, and synchronize DoD AI activities to expand Joint Force advantages. Specifically, the JAIC will: Rapidly deliver AI-enabled capabilities to address key missions, strengthening current military advantages and enhancing future AI research and development efforts with mission needs, operational outcomes, user feedback, and data; Establish a common foundation for scaling AI's impact across DoD, leading strategic data acquisition and introducing unified data stores, reusable tools, frameworks and standards, and cloud and edge services; Facilitate AI planning, policy, governance, ethics, safety, cybersecurity, and multilateral coordination; Attract and cultivate a world-class AI team to supply trusted subject matter expertise on AI capability delivery and to create new accelerated learning experiences in AI across DoD at all levels of professional education and training. (DoD 2018, p. 9)

By April of 2021, however, JAIC was facing budget cuts. Even that was seen as an opportunity to bring in more AI, faster. FCW reported:

Lt. Gen. Michael Groen, the JAIC's director, said budget constraints in current and potentially future fiscal years will only increase the department's need for enterprise-level artificial intelligence capabilities.

“In an era of tightening budgets and a focus on squeezing out things that are legacy or not important in the budget, the productivity gains and the efficiency gains that AI can bring to the department, especially through the business process transformation, actually becomes an economic necessity,” Groen told reporters April 9.

“In a squeeze play between modernizing our warfare that moves at machine speed and tighter budgets, AI is doubly necessary,” he said. (Williams 2021/Government Executive Media Group)

But the reality, as we point out throughout this book, was observed by Jacqueline Tame, JAIC's acting operating director. She said:

This is not a panacea, and that's a hard thing for a lot of people to swallow. You can't just sprinkle AI on all these legacy systems and expect them to work and talk together, especially when an adversary is actively trying to hack or jam your communications. That's not how it works. There's a mental model shift that has to happen across the department. The first step [is] not particularly sexy….The first step is helping to educate the department and our partners and all of our stakeholders. (Freedberg Jr. 2021/Breaking Media, Inc.)

This was the most realistic assessment of the situation we had seen from anywhere. In May of 2021, Tame left, becoming a strategic advisor to DoD and taking advisory roles in various companies.

Right before the year 2021 ended, DoD announced that it would be hiring a chief artificial intelligence officer (CAIO). The problem the new CAIO will face is that not only may he or she have to do a lot of backtracking and rip out existing technology, but in a technology universe dominated by CIOs and CDOs, the CAIO will also have to define a new territory and face several political issues. It is likely that the collective immune system of agencies, which is designed to stop change, will work against such efforts.

Many AI-related RFPs were sent out by various organizations—including HHS, HUD, DSA, and DOC. Some RFPs did not reflect what was needed, others were less about AI and more about legacy IT, and yet others went nowhere because strategic priorities changed.

THE CIO AND AMERICAN AI

In our discussion with a now-retired technology leader who has decades of executive experience, we learned about some of the main challenges for implementing machine learning projects in agencies. What we discovered was deeply disturbing.

The retired executive shared with us his frustration with the government. Not only have the agencies become politically charged and ideologically governed, but the way they approach strategy and conflict resolution has reached a new low. Meetings among executives can become heated quickly, and personal agendas are prioritized over national interests.

This stems not only from the fact that the country is experiencing an ideological war and political instability but also from the fact that the existing infrastructures of several government agencies are not ready for AI. Some agencies are more advanced than others and want to move ahead faster. Others are still struggling to do basic things that should have been completed in 1995 or 2005. Significant effort is spent on fixing existing problems, and that keeps people occupied. The spaghetti of systems has made it impossible to develop a modernization perspective. You can't throw out the entire information technology base, the executive explained. You have to work with what you have. When you are spending significant time and effort fixing your current problems, who has the time and resources to focus on innovation? No one is thinking big because no one can afford to think big, he mentioned. Add to that the increasing pressure from the top to show progress in AI. So what will people do? They will just make up projects to show progress.

Some of the early projects by JAIC included fixing the Army accounting system—which was done via robotic process automation (RPA) and some machine learning (Barnett 2020). The result was to match transactions. That counted as modernization and adoption. The Defense Innovation Unit was fixing accounting errors with RPA.
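
To see why this kind of work sits at the RPA end of the spectrum rather than the ML end, consider a minimal sketch of rule-based transaction matching. The records and field names below are hypothetical, not drawn from the Army's actual systems; the point is that there is no model and no training data, only a fixed rule applied to every record.

```python
# Rule-based transaction matching: deterministic, RPA-style automation.
# No learning involved; the same fixed rule is applied to every record.
from decimal import Decimal

ledger = [
    {"id": "A1", "amount": Decimal("125.00"), "ref": "PO-7781"},
    {"id": "A2", "amount": Decimal("89.50"), "ref": "PO-7790"},
]
bank = [
    {"txn": "B9", "amount": Decimal("125.00"), "ref": "PO-7781"},
    {"txn": "B7", "amount": Decimal("42.00"), "ref": "PO-9999"},
]

# The fixed rule: a ledger entry matches a bank record when both the
# reference number and the amount agree exactly.
bank_index = {(b["ref"], b["amount"]): b for b in bank}
for entry in ledger:
    match = bank_index.get((entry["ref"], entry["amount"]))
    if match:
        print(f"{entry['id']}: matched to {match['txn']}")
    else:
        print(f"{entry['id']}: unmatched, sent to exception queue")
```

Useful and measurable, but closer to batch reconciliation than to learning systems, which is why presenting such projects as AI adoption stretched the term.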

We noticed that many agency modernization RFPs were sent out that simply did not cover the AI part appropriately. The majority of the focus was on just getting the old tech working. There was nothing modern about them in relative terms. So much so that even the modernization paths were not structured to develop the next generation of technology. It was as if the only thing these RFPs were focusing on was the immediate next move, with no recognition of the following moves, the strategy of the game, or the adversary. They were rearchitecting the past—not building the future. And this reality of the state of technology is important to consider when making plans for the future. Your data comes from these systems. You cannot build advanced AI if your underlying systems fail to provide you with data efficiently.

As the internal political and other battles intensified, the pressure to do something increased on all agencies. Caught between the burst of AI-related legislation and executive orders, the agencies were forced to show progress. This drove them to come up with at least some semblance of strategy. Many agencies published strategies and plans for AI-based transformation. In some agencies, at least a basic level of strategic structure started taking shape.

It was recognized that CIOs needed some level of training and skill development to handle the modern challenges. They needed to be the architects of the transformation. They were expected to develop the American AI to confront China.

The CIO Council, an initiative to educate and enhance the skills of CIOs, published a handbook to build those skills. The goal of the handbook was explained as follows:

As a business executive, the Chief Information Officer (CIO) challenges executive leadership to think strategically about digital disruptions that are forcing business models to change and technology's role in mission delivery. As a technology leader, the CIO enables and rapidly scales the agency's digital business ecosystem while concurrently ensuring digital security. The CIO drives transformation, manages innovation, develops talent, enables the use of data, and takes advantage of evolving technologies. (CIO Council n.d./The Chief Information Officers Council/Public Domain)

Despite this powerful and transformational vision, nearly 40 percent of the content of the handbook was about legal and regulatory matters that CIOs have to comply with, and the rest was about old technology. There was no reference to artificial intelligence at all, and just one brief reference to machine learning.

The opening line of the excerpt quoted from the handbook identified CIOs as “business executives,” yet no effort was made to train these business executives in how to deploy a comprehensive process for developing an AI transformation strategy for their agencies.

Legislative initiatives and OSTP-led mandates do not accomplish results unless they are grounded in on-the-ground realities. As issued, they neither consider the underlying realities nor remove the real organizational and perceptual barriers.

STRATEGY DEVELOPMENT IN AGENCIES

The strategy development process for a business, an agency, or a country requires applying well-developed methods and processes. It requires integrated planning and studying the complex ways in which business, economic, social, competitive, and political environments develop. But it also requires an in-depth understanding of operational, organizational, and financial constraints. It makes a strategist rise above the tactical aspects of day-to-day processes and analyze the forest while simultaneously diving into the operational details and then rising back above the trees. This rising and descending continues as the strategist identifies the pathways that lead to mission success. It becomes even harder when the environment contains a significant competitive threat. A little slip, and you can lose your competitive edge.

Coming up with the most obvious aspirations—develop skills, cooperate with industry, and develop solutions—is not a strategy. It is not only an immature but also a highly naive way to think about strategy.

We stand with the federal employees who are trying to keep a balance between their daily struggles and an at times unrealistic push from the top to embrace AI. AI is not a magic wand. It is not a discrete or linear process where you can simply throw enough resources at it and the technology system will be created. It is a revolution of its own, and it works by its own rules.

THE DLA STORY

Colin Jay Williams, a Defense Logistics Agency (DLA) historian, shared the story of SAMMS: the Standard Automated Materiel Management System. It was launched by DLA to integrate logistics functions spread over the multiple supply chains that DLA managed. Work on the system started in 1964, when an HQ-based team began writing the code. The program ran into problems when an appropriate operating system could not be identified. After several course adjustments and recoveries from failure, DLA eventually launched the system in 1971—four years past the expected start date. But as soon as it was implemented, DLA experienced revolutionary productivity gains. Williams writes:

DLA leaders informed the GAO of recent program changes and installed SAMMS only a few months later at the Defense Construction Supply Center in Columbus, Ohio. Performance improved immediately. Before the year ended, the system reduced back orders from 153,000 to 64,000, increased material availability from 78.9% to 89.8% and increased the on-time fill rate from 61.8% to 71.5%. It did so with 354 fewer people.

Other parts of the agency realized similar improvements in 1973 when DLA installed SAMMS at the Defense General Supply Center in Richmond, Virginia; Defense Electronics Supply Center in Dayton, Ohio; and Defense Industrial Supply Center in Philadelphia, Pennsylvania. (Williams 2020/Defense Logistics Agency)

DLA later connected SAMMS to the Defense Integrated Data System and the Standard Automated Materiel Management Telecommunication System. While SAMMS was later retired, it provides a great example of what technological transformation means and how it creates measurable results.

In the 1990s and 2000s, DLA went through further modernizations. In 2017, a study done by DLA identified an AI-related modernization opportunity.

Later, in 2019, as DLA was trying to develop a plan for transforming with AI, Manny Vengua, the agency's Weapon Systems Sustainment R&D program manager, identified four major constraints for implementing AI/ML:

  • Data: ensuring easily accessible, reliable, and consolidated data;
  • Infrastructure: rapidly changing DoD IT, including CIO consolidation and DoD cloud migration, challenges the implementation of AI computing resources and software approvals in new environments;
  • Training: AI/ML requires advanced skills (programming languages, platforms, and mathematics); industry competition for AI/ML talent is also a concern;
  • Governance: managing, documenting, and deploying AI/ML capabilities along with legal and ethical controls.

The moral of the story is simple. With good leadership, and when people are given proper training and the freedom to execute, they get things done. What they accomplish should be measurable and result in actual value. It should also be based on facts. Most importantly, the constraints should be properly understood and removed. Having identified its constraints to implementing AI, DLA will be able to develop realistic assumptions and plans. Even though the constraints are overwhelming and spread over the entire AI supply chain—composed of data, skills to develop models, infrastructure, and governance controls—we consider these challenges to be good news. The reason is that this is not some pie-in-the-sky delusional strategy or a lofty statement. It is real. With a history of great accomplishments, as the above story shows, DLA will be able to lead in AI. But that is because the agency is operating with a sense of reality.

REFERENCES

  1. Barnett, Jackson. 2020. “JAIC Looks to Fix Errors in Army's Financial Accounting Systems.” [Online]. Available at: https://www.fedscoop.com/amry-ai-financial-management-system-diu-jaic/.
  2. CIO Council. n.d. “Chief Information Officers Council Handbook.” [Online]. Available at: https://www.cio.gov/cio-handbook/.
  3. Department of Defense. 2018. “Summary of the 2018 Department of Defense Artificial Intelligence Strategy.” [Online]. Available at: https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.
  4. Freedberg Jr., Sydney. 2021. “Culture, Not Tech, Is Obstacle to JADC2: JAIC.” [Online]. Available at: https://breakingdefense.com/2021/02/culture-not-tech-is-obstacle-to-jadc2-jaic/.
  5. Herman, Justin. 2017. “Emerging Tech and Open Data for a More Open and Accountable Government.” [Online]. Available at: https://www.gsa.gov/blog/2017/08/24/emerging-tech-and-open-data-for-a-more-open-and-accountable-government.
  6. Williams, Colin Jay. 2020. “Former IT System Integrated Logistics Functions, Improved DLA's Project Management.” [Online]. Available at: https://www.dla.mil/AboutDLA/News/NewsArticleView/Article/2327074/former-it-system-integrated-logistics-functions-improved-dlas-project-management/.
  7. Williams, Lauren. 2021. “JAIC Feels Pressure to Go Faster as Tight Budgets Loom.” [Online]. Available at: https://fcw.com/it-modernization/2021/04/jaic-feels-pressure-to-go-faster-as-tight-budgets-loom/258184/.