A framework for measuring organizational learning. Five dimensions that capture whether your organization improves from what it does — or just does more of the same.
Most organizations collect data obsessively but learn almost never. They run experiments continuously and improve by accident.
TLQ — The Learning Quotient — measures the five dimensions that determine whether an organization actually learns: Activity, Output, Learning, Resources, and Environment.
AOLRE. The score that tells you whether your organization is getting smarter — or just getting busier.
The idea is 35 years old. The ability to measure it is new.
Peter Senge described the learning organization — companies that continuously transform. The idea was embraced. Nobody could measure it.
Organizations became excellent at recording what happened. CRM, ERP, BI dashboards. Data proliferated. Learning still couldn't be measured.
Five dimensions. One score. Finally, a way to measure whether your organization is learning — not just executing. AOLRE changes everything.
Score each dimension 1–5. Get your TLQ. Know exactly what to build first.
Take the TLQ Audit →

Activity happens. Output gets measured. But the connection between them — and the learning that should result — is almost never captured.
Consider a sales team. They research accounts, build contact lists, collect signals, create messaging, execute campaigns. That's Activity. Then they measure: email opens, replies, conversations, meetings, proposals, closed deals. That's Output.
But here's what almost never happens: connecting specific Activities to specific Outputs in a way that changes how the next cycle runs. The experiments are always running — different research approaches, different email formats, different messaging angles. The results are almost never captured.
"The gap between Activity and Output is where organizational learning should live. In most organizations, it's empty."
| Failure Mode | What It Looks Like | The Consequence |
|---|---|---|
| Activity Amnesia | Teams do work but don't record what they actually did — just what they produced. | No way to know which activities drive which outputs. Every cycle starts from scratch. |
| Output Obsession | All measurement focuses on final outcomes. Intermediate signals are ignored. | By the time you know something didn't work, it's too late to understand why. |
| Learning Leakage | Individual contributors learn, but the organization doesn't capture it. | When people leave, the knowledge leaves. New hires start from zero. |
| Resource Blindness | Budget allocation is disconnected from Activity and Output data. | Investment decisions based on politics, not evidence. Waste compounds. |
| Environment Ignorance | External conditions aren't tracked. Same playbook regardless of context. | When the market shifts, the organization doesn't adapt. Same activity, worse output. |
AOLRE — Activity, Output, Learning, Resources, Environment. Each dimension captures a different aspect of whether your organization is learning from what it does.
Activity is everything your organization does before Output appears. Not the results — the work that produces the results. Most organizations track what they achieved without tracking what they actually did to achieve it.
Activity capture means recording the specific actions taken, the methods used, the variations tried. Without this, you have no way to connect cause to effect.
You know what you achieved but not how. Every success is unrepeatable. Every failure is unexplained. Improvement is accidental.
Account research approaches, contact list generation methods, signal collection sources, account strategy frameworks, messaging variants, campaign execution patterns — all the work before anyone responds.
Output is what the Activity produces — but not just the final number. Every stage of the process generates output. Most organizations only measure the end: revenue, deals closed, projects completed. They miss the intermediate signals that explain why.
Comprehensive Output tracking means measuring every stage of the funnel, every step of the process. The intermediate outputs are where the learning signal lives.
You see the end result but not the path. A bad quarter could mean bad Activity or bad conversion at any stage — you have no way to know which.
Email opens → replies → positive replies → conversations → phone calls → meetings booked → meetings held → proposals sent → negotiations → contract signatures → closed/won. Each stage is a learning opportunity.
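The stage-by-stage view above can be sketched as a simple conversion calculation. The stage names and counts below are illustrative, not a prescribed schema — the point is that stage-to-stage rates locate the leak, which the final number alone cannot.

```python
# Illustrative funnel counts for one campaign cycle (numbers are hypothetical).
funnel = [
    ("emails_sent", 1000),
    ("opens", 420),
    ("replies", 60),
    ("meetings_booked", 18),
    ("closed_won", 4),
]

# Stage-to-stage conversion rates show *where* the funnel leaks,
# not just that the end result is low.
for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {count / prev_count:.1%}")
```

A bad quarter then stops being a mystery: the stage whose rate moved is the stage to investigate.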
Learning is the connection between Activity and Output — the feedback loop that should improve the next cycle. Inside every Activity/Output pair, experiments run simultaneously. Learning is whether those results are recorded and change what happens next.
This is where most organizations fail completely. They have Activity. They have Output. They don't have the infrastructure to connect them in a way that compounds.
Experiments run but results aren't captured. When someone leaves, their knowledge leaves. New hires start from scratch. The organization never gets smarter.
Different research approaches, email formats, subject line tones and lengths, messaging angles, follow-up timing — all experiments running simultaneously, almost never captured, rarely informing the next campaign.
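What "capturing the experiment" could look like, in miniature. The record fields and variant names here are assumptions for illustration; the substance is that each Activity variant is tied to the Output it produced, so the next cycle can query what worked instead of starting over.

```python
# Minimal sketch of Learning capture: each record ties an Activity variant
# (here, an email subject-line style) to the Output it produced.
# Field names and numbers are illustrative, not a prescribed schema.
experiments = [
    {"variant": "short_subject", "sent": 500, "replies": 35},
    {"variant": "long_subject", "sent": 500, "replies": 21},
    {"variant": "question_subject", "sent": 500, "replies": 41},
]

def best_variant(records):
    """Return the variant with the highest reply rate - the input to the next cycle."""
    return max(records, key=lambda r: r["replies"] / r["sent"])["variant"]

print(best_variant(experiments))
```

The query is trivial; the discipline of writing the records is what most organizations lack.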
Resources are what enables Activity — budget, tools, training, intelligence. Resource allocation is almost never connected to Activity or Output data. Decisions about where to invest are made based on politics, precedent, or intuition rather than evidence.
Linking Resources to Activity and Output means knowing which investments actually drive which results. It's the foundation for intelligent allocation.
You spend money but can't prove what it produces. Budget fights are political, not analytical. Waste compounds because no one can see it.
Budget allocation, prospecting tool costs, data vendor subscriptions, coaching and training programs, intent signal subscriptions, market intelligence services — all investments that should be tied to Activity and Output.
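The linkage can start as arithmetic this simple. The vendors, costs, and meeting counts below are hypothetical — the point is that once spend and the Output it sourced sit in the same place, allocation becomes comparable.

```python
# Hypothetical sketch of linking a Resource (tool spend) to the Output
# it helped produce. All names and numbers are illustrative.
annual_spend = {"data_vendor_a": 24000, "data_vendor_b": 9000}
meetings_sourced = {"data_vendor_a": 80, "data_vendor_b": 45}

for vendor in annual_spend:
    cost_per_meeting = annual_spend[vendor] / meetings_sourced[vendor]
    print(f"{vendor}: ${cost_per_meeting:.0f} per meeting")
```

Cost-per-outcome is a crude first cut, but it replaces a political budget fight with a comparable number.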
Environment is everything outside your control that affects what your Activity can produce. The same Activity produces different Output in different Environments. Without tracking Environment, you can't interpret your results correctly.
Environment capture means recording the external conditions that shape your outcomes — so you know whether a change in Output reflects a change in Activity effectiveness or a change in conditions.
You blame your team when the market shifted. You credit your strategy when you got lucky. Your interpretation of results is systematically wrong.
Market conditions, buyer budget cycles, competitive moves, regulatory changes, economic indicators, industry sentiment — all the external factors that make the same outreach produce different results.
"AOLRE captures the complete picture: what you did (Activity), what happened (Output), what you learned (Learning), what you invested (Resources), and what you couldn't control (Environment). Miss any one, and your understanding is incomplete."
The five dimensions are universal. What counts as Activity, Output, Learning, Resources, and Environment varies by industry.
"The organizations that learn fastest will compound in ways static organizations cannot match."
Enterprise is where TLQ is easiest to measure because the Output signal is clearest: revenue. Every function — sales, marketing, operations, product — can map its Activity to measurable outcomes.
The competitive advantage of TLQ in enterprise is speed of learning. Two organizations with identical resources: one learns from every cycle, one doesn't. After a year, they're not even competing in the same league.
| Dimension | Enterprise Examples |
|---|---|
| Activity | Sales motions, marketing campaigns, product development work, operational processes |
| Output | Pipeline, conversion rates, revenue, customer retention, feature adoption |
| Learning | What worked, what didn't, why — captured and applied to next cycle |
| Resources | Budget, headcount, tools, training, external services |
| Environment | Market conditions, competitive moves, economic cycles, buyer sentiment |
"The sales team runs the same playbook for three years. Market shifted 18 months ago. Activity stays constant. Output declines. No one can explain why because no one tracked the Environment. Resources get cut — making the problem worse."
"Citizens deserve to know not just what government did, but whether it learned anything."
Government faces unique TLQ challenges: Output is harder to measure, Learning is harder to implement under bureaucratic constraints, and Environment includes political dynamics the private sector doesn't face.
But the need is acute. Public programs run for decades without Learning infrastructure. The same interventions get tried, fail, and get tried again because no one captured why they failed.
| Dimension | Government Examples |
|---|---|
| Activity | Policy implementation, service delivery operations, regulatory enforcement |
| Output | Citizen outcomes, compliance rates, service quality metrics, program participation |
| Learning | What interventions worked, for whom, under what conditions |
| Resources | Budget allocation, staffing, technology investments, contractor spend |
| Environment | Political dynamics, economic conditions, demographic shifts, legal constraints |
"A jobs program runs for 15 years. Outcomes are measured at the program level, never connected to specific Activities. When it's finally evaluated, no one can say which components worked. The next administration starts over."
"In healthcare, the gap between Activity and Output is measured in lives."
Healthcare has strong Output measurement (patient outcomes) but weak Activity capture (clinical processes are poorly documented) and almost no Learning infrastructure (insights stay with individual clinicians).
The stakes make TLQ urgent. The same clinical errors repeat across institutions because Learning isn't captured. The same Resource allocation mistakes happen because no one connects spending to outcomes.
| Dimension | Healthcare Examples |
|---|---|
| Activity | Clinical interventions, care protocols, diagnostic processes, treatment decisions |
| Output | Patient outcomes, complication rates, readmission rates, recovery times |
| Learning | Which interventions work for which patients under which conditions |
| Resources | Staffing, equipment, pharmaceuticals, facilities, training |
| Environment | Patient population, disease prevalence, regulatory requirements, payer dynamics |
"A surgical technique produces better outcomes at one hospital. No one captures why. The surgeon retires. The knowledge disappears. Other hospitals never learn the approach existed."
"Educational institutions are supposed to be learning organizations. Most aren't."
Education is paradoxical: institutions dedicated to learning often have the weakest organizational learning. Teaching Activity is poorly captured. Student Output beyond grades is rarely measured. Learning about what works stays with individual teachers.
TLQ in education means treating the institution itself as a learner — not just the students.
| Dimension | Education Examples |
|---|---|
| Activity | Teaching methods, curriculum delivery, student support, assessment practices |
| Output | Student learning outcomes, skill acquisition, graduation rates, career readiness |
| Learning | What teaching approaches work for which students in which contexts |
| Resources | Faculty time, facilities, technology, support services, materials |
| Environment | Student demographics, economic conditions, policy requirements, cultural factors |
"A teacher develops an approach that dramatically improves outcomes for struggling students. She retires after 30 years. No one documented her methods. The knowledge disappears. Students next year start over with whatever the new teacher brings."
The five dimensions apply everywhere. Here's what they look like in six core functions — and what becomes possible when the loop closes.
| Dimension | Sales Examples |
|---|---|
| Activity | Account research, contact list generation, signal collection, account strategy, messaging development, campaign execution, follow-up sequences |
| Output | Opens → replies → positive replies → conversations → calls → meetings → proposals → negotiations → signatures → closed/won |
| Learning | Which research approaches work, which messaging angles convert, which sequences drive responses, which objection handling closes |
| Resources | Prospecting tools, data vendors, intent signals, coaching programs, CRM, dialers, content library |
| Environment | Market conditions, buyer budget cycles, competitive moves, economic indicators, industry sentiment |
| Dimension | Marketing Examples |
|---|---|
| Activity | Content creation, campaign design, channel selection, audience targeting, creative development, distribution execution |
| Output | Impressions → clicks → visits → form fills → MQLs → SQLs → pipeline influenced → revenue attributed |
| Learning | Which content resonates, which channels perform, which audiences convert, which messages drive action |
| Resources | Ad spend, content tools, marketing automation, analytics platforms, creative resources, agency fees |
| Environment | Competitive messaging, platform algorithm changes, audience fatigue, market trends, regulatory constraints |
| Dimension | Customer Success Examples |
|---|---|
| Activity | Onboarding programs, health monitoring, QBRs, feature adoption campaigns, risk intervention, expansion conversations |
| Output | Time-to-value → adoption rates → health scores → NPS → retention → expansion → referrals |
| Learning | Which onboarding paths accelerate value, which interventions prevent churn, which triggers predict expansion |
| Resources | CSM headcount, onboarding tools, health platforms, training content, community infrastructure |
| Environment | Customer company health, competitive alternatives, economic pressure, stakeholder changes, product roadmap |
| Dimension | Engineering Examples |
|---|---|
| Activity | Requirement analysis, architecture design, implementation, code review, testing, deployment, documentation |
| Output | Stories completed → PRs merged → deployments → bugs found → incidents → feature adoption → user satisfaction |
| Learning | Which estimation approaches work, which architecture patterns scale, which review practices catch bugs, which deployments fail |
| Resources | Developer time, infrastructure costs, tooling subscriptions, training, technical debt budget |
| Environment | Technical dependencies, platform changes, security landscape, talent market, user behavior shifts |
| Dimension | Finance & Analytics Examples |
|---|---|
| Activity | Data collection, model building, scenario analysis, forecast generation, report creation, stakeholder communication |
| Output | Data quality → model accuracy → forecast precision → insight adoption → decision impact → value delivered |
| Learning | Which data sources prove reliable, which models predict well, which presentations drive action |
| Resources | Analyst time, data tools, BI platforms, financial systems, external data subscriptions |
| Environment | Regulatory changes, market volatility, stakeholder priorities, data availability, economic conditions |
| Dimension | Customer Support Examples |
|---|---|
| Activity | Ticket triage, issue diagnosis, resolution execution, escalation management, knowledge documentation, proactive outreach |
| Output | Response time → resolution time → first-contact resolution → CSAT → ticket deflection → issue prevention |
| Learning | Which resolution paths work fastest, which issues recur, which documentation deflects tickets |
| Resources | Agent headcount, ticketing tools, knowledge base, AI assistants, training programs |
| Environment | Product changes, seasonal volume, customer segment shifts, competitive support levels |
The convictions behind the TLQ Framework — why measuring organizational learning matters, and why now.
We believe organizations should learn from everything they do. Not occasionally. Not when someone remembers. Every cycle.
This is not a new idea. Peter Senge described the learning organization in 1990. Everyone agreed it was right. No one could measure it. What you can't measure, you can't manage. So the learning organization remained an aspiration — a philosophy, not an operating system.
TLQ changes that. Five dimensions. One score. For the first time, you can measure whether your organization is learning — and know exactly where it isn't.
TLQ is an open framework. No certification required. No licensing. No gatekeepers. Take the audit. Score your organization. See where you stand.
The organizations that measure their learning will outpace the ones that don't. Not because measurement is magic — but because you can only improve what you can see.
"Help organizations learn."
— Mission
Five forces converging to make organizational learning measurement possible — and necessary.
Modern tools log everything. We now have the raw data to know what organizations actually do — not just what they report doing.
CRM, analytics, BI tools — organizations know their outcomes. The gap is connecting Activity to Output, not measuring Output alone.
Finding the signal in Activity/Output data used to require armies of analysts. AI makes it possible to detect learning opportunities at scale.
Organizations that learn are pulling ahead. The compounding disadvantage of static organizations is now measurable. The stakes are clear.
AOLRE provides the structure. Five dimensions, each measurable, together capturing the complete picture of organizational learning. No more philosophy without measurement.
Not measuring organizational learning was defensible when it couldn't be measured. It isn't now. The organizations measuring their TLQ are pulling ahead, and ignoring the metric doesn't make the gap disappear.
Most organizations stop partway: they measure Output, maybe some Activity. Learning, Resources, Environment? Almost never connected. Partial measurement produces partial understanding.
Complete picture. Real feedback loops. Every cycle improves the next. This is the learning organization — not as aspiration, but as operating system.
"The question isn't whether to measure organizational learning. It's whether you measure it before or after your competitors do."
Five dimensions. Score each 1–5. Your total is your Learning Quotient — and a map of where to focus.
Do you systematically capture what your organization does — the specific actions, methods, and variations — or just what it achieves?
Do you measure outcomes at every stage of your processes — or just the final results?
When Activity produces Output, is that connection captured? Does what you learn change the next cycle?
Can you connect your investments — budget, tools, training — to the Activity they enable and the Output they produce?
Do you track the external conditions — market, competition, context — that shape what your Activity can produce?
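The audit arithmetic above can be sketched in a few lines. The 1–5 scale, the five dimensions, and "your total is your Learning Quotient" come from the framework itself; the function, the example scores, and the weakest-dimension heuristic below are an illustrative sketch of "know exactly what to build first", not an official scoring tool.

```python
# Sketch of the TLQ audit: each AOLRE dimension scored 1-5, the total is
# the Learning Quotient, and the lowest-scoring dimension is where to build first.
DIMENSIONS = ("Activity", "Output", "Learning", "Resources", "Environment")

def tlq(scores):
    """Return (total, weakest dimension) for a full set of 1-5 scores."""
    assert set(scores) == set(DIMENSIONS)
    assert all(1 <= v <= 5 for v in scores.values())
    return sum(scores.values()), min(scores, key=scores.get)

total, weakest = tlq({"Activity": 3, "Output": 4, "Learning": 1,
                      "Resources": 2, "Environment": 2})
print(total, weakest)  # 12 Learning
```

A score of 12 out of 25 with Learning at 1 is a common shape: plenty of doing and measuring, no loop connecting them.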
An open framework developed by practitioners committed to measuring and improving organizational learning.
Organizations implementing TLQ to measure and improve their learning capacity. The proof that the framework works comes from real-world adoption.
Measuring learning reveals gaps. Some organizations would rather not know. Adopters must be ready to see — and act on — uncomfortable truths.
Real-world validation and sector-specific adaptations that prove TLQ works in practice.
Professionals who help organizations assess and improve their TLQ scores. They translate framework into practice.
Consultants benefit from complexity. TLQ is simple by design. The tension is between billable hours and framework fidelity.
Implementation expertise and cross-industry pattern recognition.
Companies building tools that capture Activity, measure Output, and enable Learning infrastructure. The technology layer that makes TLQ measurable at scale.
Shipping features is easier than shipping learning infrastructure. The tension is between velocity and measurement capability.
The infrastructure that makes TLQ measurement automatic rather than manual.
Scholars studying organizational learning, validating TLQ against empirical evidence, and extending the framework through rigorous research.
Academic rigor takes time. Practitioners need answers now. The tension is between certainty and utility.
Empirical validation and theoretical depth that strengthen the framework's foundation.
People inside organizations who champion TLQ adoption — often without formal authority. The internal advocates who make change happen.
Championing change is risky. If TLQ doesn't produce results, champions take the blame. The tension is between advocacy and career safety.
Ground-level adoption that proves TLQ works from the inside out.
People contributing to TLQ's development — writing documentation, building tools, creating sector-specific applications, improving the framework itself.
Open source work is often unpaid. The tension is between contribution and compensation.
The infrastructure, documentation, and tooling that makes TLQ accessible to everyone.
| # | Principle | Description |
|---|---|---|
| 01 | Open by Default | The framework is freely available. No licensing, no certification requirements. Take it and use it. |
| 02 | Evidence Over Opinion | Framework changes require evidence from real implementations. Theory is not enough. |
| 03 | Practitioner Voice | People implementing TLQ have the strongest voice in its development. Users lead. |
| 04 | Simplicity Preserved | Five dimensions. No more. Complexity is the enemy of adoption. |
| 05 | Continuous Learning | The framework practices what it preaches. We measure our own AOLRE. |