Artificial Inundation: AI Is the Future and We’re Living in It

We’re still in the first quarter of the fiscal year and headed toward the holiday season. Historically, that portends a slower pace across the federal sector, but not this year. This year, artificial intelligence (AI) is having a moment, and nearly everyone across the public sector, including the White House and the Office of Management and Budget (OMB), has something to say about it.

The President recently released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It was followed almost immediately by a proposed OMB memo, open for comment, titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. Both are a first of their kind from the executive branch: an attempt at a roadmap of sorts to steer both the conversation and the policy surrounding the latest technologies to pervade the IT world.

Tell Me What I Need to Know

It’s a bit of a wild-west atmosphere right now when it comes to AI and how it’s being developed, used and secured. The cart left before the horse. So now we’re all collectively yanking the cart back a bit and hoping to restart it on a less winding road, one that Europe and others worldwide have also begun to pave.

The executive order (EO) and the OMB memo share a few prevailing themes. The crux is clear: more security, privacy and overall governance are required as we navigate the possibilities artificial intelligence offers. Let’s break down the key points in each of those categories, as well as the opportunities they offer information technology (IT) vendors as this all gets built out over the coming years.

Security
One of the top priorities in the EO and OMB’s memo is security as it relates to the development and use of AI. The executive order contains over 125 mentions of the word “security,” so it’s undoubtedly a top concern. As a start, the President will be requiring the developers of AI systems to share their safety test results and other critical information with the government before making any system public. The hope, of course, is to thwart security risks before a system ever goes live.

Security is also in play as it relates to vulnerabilities in software. This is a call-out to all cybersecurity vendors: the administration will be seeking advanced cybersecurity programs that develop AI-enabled tools to find and fix vulnerabilities in critical software. Relatedly, there is a Cyber Challenge currently underway. Led by the Defense Advanced Research Projects Agency (DARPA) in collaboration with several top AI companies (Anthropic, Google, Microsoft and OpenAI), it will feature almost $20 million in prizes for innovative technologies that rapidly improve the security of computer code. If you’re a vendor in this space, don’t miss this opportunity to contribute to the front end of these foundational technologies through fiscal year (FY) 2025.

As a final note on security, the administration also expects agencies to update their individual cybersecurity protocols to accommodate AI applications, including continuous authorizations for AI systems. The overarching theme is to have every agency playing from the same deck of cards, or set of security standards, as AI continues to pervade the market.

Privacy
Security is often coupled with privacy and the concern of protecting it. Both the White House and OMB are making it abundantly clear that mitigating the exploitation of personal data and protecting against unlawful discrimination and bias are imperative to any use of AI. Agencies will be seeking privacy-enhancing technologies (PETs) to protect against these threats, and vendors whose technologies encompass these capabilities should keep an ear out for opportunities in this space. We know from previous federal guidance that clean, actionable data leads to the best insights; maintaining the appropriate levels of privacy while solving problems with AI is, and will continue to be, the gold standard.

Governance
Finally, speaking of all that goes into developing and securing AI, someone at the top of each agency has to be responsible for it all. What says governance more than a dedicated role solely overseeing all things AI? Both documents note the expectation that federal agencies designate a Chief AI Officer, responsible for coordinating their agency’s use of AI, promoting innovation within the agency, and ensuring risk management protocols are in place.

The President has also called for coordination of AI use across the federal government, charging the Director of OMB with convening and chairing an interagency council to coordinate the development and use of AI in agencies’ programs and operations.

The stage is set. Action and standards are being crafted on everything from personnel to privacy to the development and secure use of AI. FY24 is sure to be AI’s prime time for development, and having conversations now with program and procurement officials is sure to pay dividends for years to come.

To get more TD SYNNEX Public Sector Market Insight content, please visit our Market Intelligence microsite.

About the Author:
Susanna Patten is a senior analyst on the TD SYNNEX Public Sector Market Insights team, covering tech domain-centric trends across the public sector.