Enterprise AI Adoption: From Pilots to Real Business Value

Prominent tech leaders talk about how they’re deploying AI

2025 was the year of enterprise AI adoption, with surveys showing that many large companies now use AI in at least one business function. But questions remain: Are companies using it in meaningful ways, and generating value? And are employees gravitating toward it? Several high-profile reports published this year found that the vast majority of AI pilots stall or fail, and that most organizations see little measurable return on their AI investments.

To explore the gap between adoption and impact, we spoke with three Bain Capital Ventures CTO Advisory Board members: Franziska Bell, Chief Data and AI Officer at Ford; Sandeep Chouksey, CTO and AI Officer at Mammoth Brands; and Ben Kus, CTO at Box. Each is at the center of enterprise AI decision-making, with firsthand exposure to both the promise and the friction shaping real-world deployments.

Despite operating in different industries, their perspectives converged on a key point: AI usage inside their organizations is real and persistent, even if scaling is uneven. Some already see significant value, while others argue that what looks like slow progress is a necessary period of experimentation — one that has been widely underestimated given the disruptive nature of the technology.

Getting employees to use AI

Fran: Adoption at Ford has occurred quite naturally across the organization. We have about 50,000 people actively using our internal platform, Ford LLM, every week. I lead a multidisciplinary team of technologists and data scientists who work hand in hand with vehicle designers, product managers, engineering, and manufacturing teams. The group is deeply embedded within business units, focused on understanding employees’ day-to-day work and applying AI in practical ways that both improve how teams operate and inform broader business strategy.

Sandeep: We've created a culture of curiosity at Mammoth Brands, and that ethos has certainly informed how we approach and experiment with AI. While we're always intentional about the tools we use, we've set a strategy around AI pilots that encourages our team to test and learn with agility across a wide range of partners. Enabling that means we're open to investing in trials and fast-tracking legal and security review so our team can quickly learn which platforms deliver the most value. We also prioritize knowledge sharing: we have a task force of AI champions that meets to discuss learnings, best practices, and how to drive further adoption within their functions, and we regularly have team members share use cases.

Ben: At Box, we began by educating our teams with a company-wide AI certification course so that employees at all levels could strategically experiment with AI in their job functions and see what sticks. Our leadership team doesn’t try to dictate what every employee should do; rather, we provide access to the tools, some basic background on how they work, and what’s allowed with the underlying data. Then we let them figure it out. We also heavily encouraged the use of our own AI tools by enabling Boxers to share how they’re successfully using our products to enhance their day-to-day work.

Successful AI use cases

Fran: We have a dozen of what we call “AI Big Bets,” each aligned with Ford’s overall strategies. Our supply chain risk-assist AI is a multi-agent system for early identification of risks in Ford’s large and complex supply chain. It lets us work with our supply chain partners to mitigate issues or delays before they escalate so we can deliver cars on time.

In the vehicle design process, generative AI capabilities are helping our designers go from a manual sketch to 2D or 3D renderings of vehicle interiors and exteriors with the push of a button. Previously, this took hours. For vehicle testing, proprietary models simulate aerodynamic drag calculations that used to take 16 to 18 hours. Now we get comparable accuracy in seconds.

Sandeep: Our analytics and insights team has found significant value in tools that sit on top of our data warehouse. The ability to ask a question like "How is our revenue trending over the last six months?" does more than create efficiencies; it democratizes who can access and interpret data. Another great example of valuable data automation comes from our supply chain team. They use a GenAI tool that streamlines their data pipeline and workflow creation, including parsing unstructured data that sits in hundreds of emails and PDFs. This lets them make faster, more informed decisions and partner more productively with suppliers and manufacturers.

Ben: We gave the whole company the earliest form of our Box agents for unstructured data and held a hackathon. Interestingly, some of the teams that embraced it the most — procurement, compliance, and audit — all have precise jobs. One team automated audit fact-finding, which they identified as one of the most challenging and time-consuming parts of the job. Now AI can look through immense amounts of data to quickly provide answers and actionable insights.

Assessing and measuring AI value

Fran: Our “AI Big Bets” all undergo stringent financial analysis, and they are already delivering outsized value. But our success metrics aren’t just financial. We also look at agility and speed, and we want to free up time for our subject matter experts so they can focus on what matters most.

Sandeep: We piloted over 30 new AI tools in 2025, which surpassed the goal we set at the start of the year. Many of them failed, and that's by design. Our priority was to encourage rapid learning without the immediate pressure of ROI. Programs that prove useful enough to make it into production are then integrated into departmental budgets, which acts as a natural forcing function for teams to vet a tool’s ROI and take ownership of its value.

Ben: I’m all for measuring ROI, but with AI still evolving at such a rapid pace, it’s foolish to overemphasize it at this time. We try not to say, “You spent three months and haven’t proven any value, so the technology or program is cut.” There will be time for that critical lens as things mature, but for right now, experimentation is key for success.

Acquiring AI solutions and important qualities for vendors

Fran: We try to buy solutions off the shelf. Cybersecurity is of immense importance during procurement, and all companies, no matter their size, must meet our security requirements. As a big Google shop, our infrastructure is predominantly on its cloud platform, so interoperability with the Google ecosystem is another important component. We prefer to work with medium-to-large companies, but we also work with smaller vendors whose innovations sometimes outperform those of the bigger players.

Sandeep: We don’t have a preference for existing software vendors and are willing to work with smaller companies that might be moving faster on AI. All 30 of the tools we procured were from vendors, with two-thirds from companies with fewer than 100 employees. We’re not necessarily looking for a 100% finished product. We love being a design partner, working directly on the product roadmap and having smart people help us build cool stuff. Startups also need to meet basic security and compliance requirements, so we know they’re thoughtful about the way they’re handling our data.

Ben: We’ve tried every flavor of AI technology over the last 12–18 months — our own tools, vendor solutions, open source, commercial. Change is so frequent that the normal rules around procuring a tool or software are no longer relevant. We don’t want to hear vendors say, “I have this one solution that figures everything out.” Instead, we like to hear, “This is a quality tool and worth the investment — money-wise, time-wise, effort-wise. And as things change, models get better, and the sophistication and complexity of AI agents increase, we’ll respond quickly.”

If you are interested in learning more about our BCV Advisory Boards, please reach out to Nicole Falasco at nfalasco@baincapital.com.