Your AIQ Is Too Low — Here's What to Do About It
Bob Pulver draws on 22 years at IBM — including seven years at IBM Research where Watson was born — to explain why every organization needs to elevate its AIQ before deploying AI at scale.
What IBM's Innovation Lab Taught Bob About AI Adoption
Before anyone was talking about "AI transformation," Bob Pulver was living it inside IBM. He spent 22 years there moving through customer service, process re-engineering, SAP implementations, program management, and a chief of staff role supporting a senior executive through a global supply chain transformation. But the real inflection point came when he chose a research lab over another people management role.
For seven years, Bob connected C-suite clients to IBM Research in Westchester County, New York — the campus where Watson was introduced. He played Jeopardy against Watson in the mock studio that sat in his office. He watched cognitive computing evolve from a research curiosity into a commercial product. And he led an entrepreneurial program designed to get the average IBM employee — who had no idea what cognitive computing was — to understand it, experiment with it, and build with it.
"You had to take them on this journey with you. Here's some education. Now that you know what it can do, what can we do with it? That's where you get the creativity and the curiosity to start to explore." — Bob Pulver
That internal program at IBM — complete with MOOCs, crowdfunding, team-building, MVPs, and shark tank pitches — is exactly the framework Bob believes every organization needs to adopt for generative AI today. The technology is different. The human adoption challenge is identical.
Why "AIQ" Matters More Than AI Tools
Bob coined the term AIQ — AI Intelligence Quotient — to capture something the industry keeps getting wrong. Companies are rushing to deploy AI tools without first assessing whether their people are ready to use them. The result is frustration, distrust, and wasted investment.
The problem isn't that people are resistant. It's that everyone is starting from a different place. Some are jumping into Claude Code and Replit. Others are still just replacing Google with ChatGPT — which, as Bob points out, isn't even scratching the surface of what AI can do. And some are paralyzed entirely, worried about their job, their kids' careers, or whether they're too close to retirement to bother learning.
"We have to acknowledge everyone's got a different comfort level. Everyone's got some anxiety about some aspect of this — whether it's fear for their own job or their kids who are going to expensive college and come out and not have a job." — Bob Pulver
Bob's approach: assess individual AI readiness — literacy, comfort, and maturity — then build personalized learning journeys. The same way L&D has always worked, but applied to the most consequential technology shift in a generation. Aggregate the individual assessments and you can see patterns across teams, departments, and the entire organization. Then you can make targeted investments instead of blanket mandates.
Responsible Design, Not Just Responsible Use
Most of the conversation around AI responsibility focuses on use: don't upload PII into public models, don't trust outputs blindly, change your privacy settings. Bob pushes further. It's not just about responsible use — it's about responsible design.
"This isn't just about responsible use. It's about responsible design — because we are all builders now." — Bob Pulver
With no-code tools like Base44 and Lovable, and AI-assisted development environments, the barrier to building software has collapsed. That means more people are creating AI-powered tools and workflows than ever before — many without formal training in security, data governance, or bias mitigation. The question isn't just "can I trust this AI?" It's "can I trust the person who built this AI workflow?"
For enterprise leaders, this reframes the entire AI governance conversation. You're not just governing a set of vendor tools. You're governing an ecosystem of internal builders who are creating AI-powered processes across every function.
HR Is the Final Frontier — And That's a Problem
Tony Buffum raised a hypothesis during the conversation: HR is always the last function to adopt new technology. Bob agreed, but with nuance. The risk is genuinely higher in HR — you're dealing with people's careers, compensation, and livelihoods. The Eightfold class action around data consent is a live example of what happens when AI is deployed in talent acquisition without sufficient guardrails.
But the bigger issue is structural. HR still operates as a cost center in most organizations, and the CHRO often lacks the partnership with the CIO or Chief Data Officer needed to build the business case for AI investment in talent. Bob argues that talent acquisition should be positioned as a growth engine — look at the intellectual capital we're bringing in — not just a cost line to optimize.
"If you're just looking for efficiency gains and cost savings, you're going to get diminishing returns. You've got to say, where can AI enhance and augment human judgment, human creativity, human capability, so that we can scale what you as an individual or as a team are capable of." — Bob Pulver
The organizations that get this right will make their people feel like they have superpowers. The ones that don't will automate the easy stuff, cut headcount, and wonder why their competitive advantage evaporated.
The Bottom Line
AI adoption isn't a technology problem. It's a human readiness problem. Bob Pulver has watched this movie before — at IBM, with cognitive computing, with innovation management programs that got defunded because leadership didn't have the patience to see them through. The organizations that win the AI era won't be the ones that deploy the most agents. They'll be the ones that elevated their people's AIQ first, built trust into their systems, and treated AI as a capability multiplier rather than a cost reduction tool.
As Bob puts it: don't deploy AI wherever you can. Deploy it where you should.
About Bob Pulver
Bob Pulver is the founder of Elevate Your AIQ, where he helps organizations assess and elevate their AI readiness across every level — from individual contributors to the C-suite. A 22-year IBM veteran who spent seven years at IBM Research during the birth of Watson, Bob brings deep experience in innovation management, cognitive computing, and human-centered AI adoption to the talent transformation space.
Listen to the full episode: Human Cloud Podcast on Spotify
This article was adapted from the Human Cloud Podcast. Subscribe wherever you get your podcasts.