The Future of AI Governance

by John Derrick 
February 15, 2024

AI governance mechanisms are essential to establishing trust in AI/ML neural networks and processing systems, starting with measuring the impact of training content.

In today’s digital landscape, AI technologies are accelerating industries, driving innovation, and transforming the way we work and live. While these technologies can consolidate volumes of information and content, enabling profoundly innovative use cases, they also carry significant risks in both content accuracy and appropriate use. Great power brings great responsibility, and the need for robust AI governance has never been more pressing.

Evolution and Advancement

I have been immersed in the rapidly evolving world of artificial intelligence and machine learning (AI/ML) for almost two decades, with a focus on technologies that can transform markets by solving hard(er) problems. Several of these efforts involved semiconductor-based hardware systems, while others were pure software aimed at making at-scale compute more efficient and more performant.

Over the past decade or more, I’ve had the opportunity to assist many software companies in the AI/ML space. Whether using AI/ML to optimize genetics for biological factories, providing a platform for AI/ML workloads across CPU, GPU, and FPGA hardware accelerators, or tripling optimized performance for AI/ML inference, the technologies applied within AI/ML neural networks and processing systems have made great strides not only in data set size and processing, but also in the formulation of problems.

The first neural network I personally constructed, back in my university days, was very limited: it was based on a switched-capacitor structure and mimicked only a couple of synapses. The advances since have been enormous, and it is not hyperbole to call them exponential, with hundreds of thousands of times the compute available and models that are millions, even billions, of times larger today.

However, even as these advances have occurred and we see broader applicability and adoption of AI/ML, we still face several monumental challenges that have not been addressed, because AI/ML systems ultimately rely on content, data, and appropriate training and governance of operation.


Effective AI Governance Builds Trust

For some time, and certainly throughout 2023, trustworthiness of operation, authenticity, and provenance of training content have emerged as key factors in AI/ML adoption and use. Even amid the broader hype and advances in AI, these factors have not been adequately addressed and have only recently gained the awareness and concern they warrant.

It was for this reason that, during 2023, I set out to study and attempt to address the issue of measuring the impact and value of content within an AI/ML neural network. The result of that work was an invention and associated IP filing that are the basis for our work here at

This algorithm makes it possible to examine the structure and weights of a neural network over time and determine how the network is affected by new training data sets, operation, and tuning. This information and the associated capabilities prepare organizations to build trust in their systems and to efficiently address and correct issues as they are detected.
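The patented algorithm itself is not described here, but the general idea of periodically measuring a network's weights to gauge how much a round of training moved them can be sketched. The following is a minimal, hypothetical illustration (not the company's method): it snapshots weights before and after training on a new data set and computes per-layer L2 drift as a crude proxy for that data set's impact. All names and structures are assumptions for the example.

```python
import numpy as np

def layer_drift(before, after):
    """Per-layer L2 distance between two weight snapshots.

    A larger value means the training round moved that layer's
    weights more, which we treat as a rough impact signal.
    """
    return {name: float(np.linalg.norm(after[name] - before[name]))
            for name in before}

def total_drift(before, after):
    """Aggregate drift across all layers: one number summarizing
    how much a new data set (or tuning pass) changed the network."""
    return sum(layer_drift(before, after).values())

# Toy snapshots: weights captured before and after training on a
# hypothetical new data set (random perturbation stands in for training).
rng = np.random.default_rng(0)
w_before = {"dense1": rng.normal(size=(4, 4)),
            "dense2": rng.normal(size=(4, 2))}
w_after = {name: w + 0.01 * rng.normal(size=w.shape)
           for name, w in w_before.items()}

drift = layer_drift(w_before, w_after)
print(drift)
print(total_drift(w_before, w_after))
```

In practice, a monitoring system would record such snapshots at each training or tuning checkpoint and attribute the measured change back to the data sets used in that interval; this sketch only shows the measurement step.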


Next Steps for AI Governance

To support the need for critical AI governance, was formed. Importantly, it is not a new method of creating a neural network; rather, it allows periodic measurement and monitoring of neural networks to determine the value of training sets, measure the impact of each training set on the computation of results, and identify the contribution of each data set to the results the neural network produces.

The company will provide this capability as a SaaS offering as well as an on-premises software solution, applicable to any system that utilizes neural networks, including generative AI and retrieval-augmented generation (RAG). This has clear application and immense value for companies that need to trust their AI, optimize training processes, and value the content they use.

Going forward, organizations must prioritize building trust in their AI/ML models; establishing methods to understand the impact of training content on neural networks is paramount for effective AI governance.

Contact us to learn more about how you can build trust in your AI/ML neural networks and processing systems with