AI is transforming how private market investors evaluate opportunities and how portfolio companies operate day-to-day. But as adoption accelerates, so does the need for strong governance to ensure AI is deployed responsibly, ethically, and in alignment with emerging regulatory expectations.
In our recent webinar, the Malk Partners team explored why responsible AI must now be on every investor’s agenda – and what practical steps firms can take to oversee AI use across their portfolios. Below are the core insights we shared.
AI’s Expanding Risks and Opportunities
AI presents enormous potential for operational efficiency, deeper insights, and new avenues for value creation. However, this upside comes with real risks if AI is not managed carefully. These include:
- Operational risks: errors, inaccuracies, or model failures
- Ethical risks: biased outputs, discriminatory outcomes, or misuse
- Regulatory risks: evolving requirements across jurisdictions
- Reputational risks: public scrutiny around transparency and fairness
As AI becomes embedded in products, processes, and decision-making, investors must ensure the right oversight is in place to manage these risks proactively.
ESG Implications of AI Use
Responsible AI is inherently an ESG issue, touching all three pillars:
Environment
AI systems can be resource- and energy-intensive. Evaluating the environmental footprint of AI use – particularly large-scale or compute-heavy models – is becoming an important part of data center and cloud strategy.
Social
AI can introduce or amplify bias, create inequitable outcomes, or limit accessibility. Ensuring models are fair, explainable, and human-centered is essential for protecting people who may be impacted by AI-driven decisions.
Governance
Strong governance is the backbone of responsible AI. Companies should establish clear policies, decision-making structures, and processes to guide AI use. This includes documentation, oversight committees, and defined accountability for AI-related risks.
Tangible Steps for AI Governance
AI governance matters regardless of whether AI is being built in-house or used through third-party tools. While governance expectations scale with company size, complexity, and the risk profile of the AI system, all organizations should have baseline practices in place, including:
- Clear AI use policies outlining where and how AI can be used
- Risk assessments for AI tools and models
- Human oversight mechanisms for all use cases, especially high-risk AI decisions
- Bias and performance testing on a recurring basis (see the sketch after this list)
- Visibility into third-party AI tools used across the business
- Training and awareness programs for employees
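To make the bias and performance testing item concrete, here is a minimal sketch of what a recurring check might look like. It assumes a hypothetical loan-approval model whose decisions are logged alongside a protected attribute; the column names, the demographic-parity gap metric, and the thresholds are illustrative choices, not a prescribed standard.

```python
# A minimal sketch of a recurring bias and performance check, assuming a
# hypothetical loan-approval model. Column names and thresholds are
# illustrative, not regulatory requirements.
import pandas as pd

APPROVAL_RATE_GAP_LIMIT = 0.05   # illustrative fairness tolerance
MIN_ACCURACY = 0.90              # illustrative performance floor

def check_bias_and_performance(df: pd.DataFrame) -> list[str]:
    """Flag issues in a batch of logged model decisions.

    Expects columns: 'group' (a protected attribute), 'approved'
    (model decision, 0/1), and 'correct' (1 if the decision matched
    the ground-truth outcome).
    """
    findings = []

    # Performance: overall accuracy against labeled outcomes.
    accuracy = df["correct"].mean()
    if accuracy < MIN_ACCURACY:
        findings.append(f"Accuracy {accuracy:.2%} is below the {MIN_ACCURACY:.0%} floor")

    # Bias: demographic parity gap, i.e. the spread in approval rates across groups.
    rates = df.groupby("group")["approved"].mean()
    gap = rates.max() - rates.min()
    if gap > APPROVAL_RATE_GAP_LIMIT:
        findings.append(f"Approval-rate gap of {gap:.2%} exceeds {APPROVAL_RATE_GAP_LIMIT:.0%}")

    return findings

if __name__ == "__main__":
    # Toy data standing in for a real scoring log.
    df = pd.DataFrame({
        "group":    ["a", "a", "a", "b", "b", "b"],
        "approved": [1,   1,   0,   0,   0,   1],
        "correct":  [1,   1,   1,   1,   0,   1],
    })
    for finding in check_bias_and_performance(df):
        print("FLAG:", finding)
```

A check like this can run on a schedule (for example, monthly or quarterly) and route its flags to whoever holds accountability for AI-related risks, so testing feeds directly into the governance structures described above.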
Ultimately, responsible AI requires intentionality: knowing which AI systems are in use, how they are applied, and what controls are needed to ensure they create value without introducing unintended harm.
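As one illustration of that intentionality, a lightweight internal inventory can track where AI is in use, who owns it, and which controls apply. The sketch below is a hypothetical register; the fields, risk tiers, and example tools are assumptions for illustration rather than a mandated framework.

```python
# A minimal sketch of an AI tool inventory. Field names, risk tiers, and the
# example entries are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str
    vendor: str                    # "internal" for tools built in-house
    use_case: str
    risk_tier: str                 # e.g. "low", "medium", "high" (illustrative tiers)
    owner: str                     # accountable person or team
    controls: list[str] = field(default_factory=list)

inventory = [
    AIToolRecord(
        name="resume-screener",
        vendor="ExampleVendor Inc.",          # hypothetical third-party tool
        use_case="Shortlisting job applicants",
        risk_tier="high",
        owner="People Operations",
        controls=["quarterly bias testing"],  # note: no human-oversight control yet
    ),
    AIToolRecord(
        name="ticket-summarizer",
        vendor="internal",
        use_case="Summarizing customer support tickets",
        risk_tier="low",
        owner="Customer Success",
        controls=["spot-check of summaries"],
    ),
]

# Surface high-risk tools that lack a documented human-oversight control.
for record in inventory:
    if record.risk_tier == "high" and not any("human" in c.lower() for c in record.controls):
        print(f"REVIEW: {record.name} is high-risk with no documented human-oversight control")
```

Even a simple register like this gives investors and boards a starting point for the visibility, risk assessment, and oversight practices outlined above.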
Want to Learn More?
If you missed the webinar or would like to discuss how investors and portfolio companies can strengthen their approach to AI governance, our team would be happy to connect.
You can also access the full webinar recording here to watch on your own time.
Reach out to us to continue the conversation.

