The TLQ Framework

The Learning Organization.

A framework for measuring organizational learning. Five dimensions that capture whether your organization improves from what it does — or just does more of the same.

5 Dimensions · 25-Point Scale

Most organizations collect data obsessively but learn almost never. They run experiments continuously and improve by accident.

TLQ — The Learning Quotient — measures the five dimensions that determine whether an organization actually learns: Activity, Output, Learning, Resources, and Environment.

AOLRE. The score that tells you whether your organization is getting smarter — or just getting busier.

From Vision to Infrastructure

The idea is 35 years old. The ability to measure it is new.

1990
The Vision

Peter Senge described the learning organization — companies that continuously transform. The idea was embraced. Nobody could measure it.

2000–2020
The Data Era

Organizations became excellent at recording what happened. CRM, ERP, BI dashboards. Data proliferated. Learning still couldn't be measured.

2024+
The TLQ Era

Five dimensions. One score. Finally, a way to measure whether your organization is learning — not just executing. AOLRE changes everything.

Assessment
Know your Learning Quotient.

Score each dimension 1–5. Get your TLQ. Know exactly what to build first.

Take the TLQ Audit →
The Problem

Organizations execute continuously. They learn almost never.

Activity happens. Output gets measured. But the connection between them — and the learning that should result — is almost never captured.

Old Frame
"We need better execution."
New Frame
"We need to learn from execution."

Consider a sales team. They research accounts, build contact lists, collect signals, create messaging, execute campaigns. That's Activity. Then they measure: email opens, replies, conversations, meetings, proposals, closed deals. That's Output.

But here's what almost never happens: connecting specific Activities to specific Outputs in a way that changes how the next cycle runs. The experiments are always running — different research approaches, different email formats, different messaging angles. The results are almost never captured.

"The gap between Activity and Output is where organizational learning should live. In most organizations, it's empty."

What breaks without Learning infrastructure
Activity Amnesia
What it looks like: Teams do work but don't record what they actually did — just what they produced.
The consequence: No way to know which activities drive which outputs. Every cycle starts from scratch.

Output Obsession
What it looks like: All measurement focuses on final outcomes. Intermediate signals are ignored.
The consequence: By the time you know something didn't work, it's too late to understand why.

Learning Leakage
What it looks like: Individual contributors learn, but the organization doesn't capture it.
The consequence: When people leave, the knowledge leaves. New hires start from zero.

Resource Blindness
What it looks like: Budget allocation is disconnected from Activity and Output data.
The consequence: Investment decisions based on politics, not evidence. Waste compounds.

Environment Ignorance
What it looks like: External conditions aren't tracked. Same playbook regardless of context.
The consequence: When the market shifts, the organization doesn't adapt. Same activity, worse output.
The five dimensions of TLQ exist because each failure mode needs its own measurement. You can't fix what you can't see.
See the TLQ Framework → Take the Audit
TLQ Framework

Five dimensions of organizational learning.

AOLRE — Activity, Output, Learning, Resources, Environment. Each dimension captures a different aspect of whether your organization is learning from what it does.

A · O · L · R · E
A
Activity
The deliberate work done to generate results

Activity is everything your organization does before Output appears. Not the results — the work that produces the results. Most organizations track what they achieved without tracking what they actually did to achieve it.

Activity capture means recording the specific actions taken, the methods used, the variations tried. Without this, you have no way to connect cause to effect.
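
As a concrete illustration, here is a minimal sketch of what an Activity record could look like in code. The field names (actor, action, method, minutes_spent) are hypothetical examples, not part of the framework; the point is that each record captures the how, not just the what.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ActivityRecord:
    """One unit of deliberate work, logged with enough detail to analyze later."""
    actor: str           # who did the work
    action: str          # what was done, e.g. "cold_email_sent"
    method: str          # which variation was used, e.g. "subject_line_B"
    minutes_spent: int   # where effort actually goes
    timestamp: datetime = field(default_factory=datetime.now)

# Recording the work itself, not just its result:
log = [
    ActivityRecord("rep_01", "account_research", "firmographic_scan", 25),
    ActivityRecord("rep_01", "cold_email_sent", "subject_line_B", 4),
]
```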

What to Measure
Action logging — Are specific activities recorded with enough detail to analyze?
Method variation — Are different approaches being tried and tracked?
Time allocation — Do you know where effort actually goes?
Without This

You know what you achieved but not how. Every success is unrepeatable. Every failure is unexplained. Improvement is accidental.

The Sales Illustration

Account research approaches, contact list generation methods, signal collection sources, account strategy frameworks, messaging variants, campaign execution patterns — all the work before anyone responds.

O
Output
The measurable results at every stage

Output is what the Activity produces — but not just the final number. Every stage of the process generates output. Most organizations only measure the end: revenue, deals closed, projects completed. They miss the intermediate signals that explain why.

Comprehensive Output tracking means measuring every stage of the funnel, every step of the process. The intermediate outputs are where the learning signal lives.

What to Measure
Stage metrics — Are you measuring every step, not just the final outcome?
Conversion rates — Do you know the drop-off between each stage?
Quality signals — Beyond quantity, are you capturing quality indicators?
Without This

You see the end result but not the path. A bad quarter could mean bad Activity or bad conversion at any stage — you have no way to know which.

The Sales Illustration

Email opens → replies → positive replies → conversations → phone calls → meetings booked → meetings held → proposals sent → negotiations → contract signatures → closed/won. Each stage is a learning opportunity.
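
To make the stage-by-stage idea concrete, here is a minimal sketch that computes the conversion rate between adjacent funnel stages. The stage names and counts are invented for illustration.

```python
# Illustrative funnel counts; each adjacent pair yields a conversion rate.
funnel = [
    ("emails_sent", 5000),
    ("opens", 1900),
    ("replies", 240),
    ("meetings_held", 45),
    ("closed_won", 9),
]

for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count if count else 0.0
    print(f"{stage} -> {next_stage}: {rate:.1%}")
# The drop-off between stages shows where the process leaks,
# which the final number alone never can.
```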

L
Learning
What is captured and fed back

Learning is the connection between Activity and Output — the feedback loop that should improve the next cycle. Inside every Activity/Output pair, experiments run simultaneously. Learning is whether those results are recorded and change what happens next.

This is where most organizations fail completely. They have Activity. They have Output. They don't have the infrastructure to connect them in a way that compounds.
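
As a sketch of what the missing infrastructure could look like at its simplest: group Outputs by the Activity variation that produced them, so parallel experiments are compared instead of lost. The variant names and results below are hypothetical.

```python
from collections import defaultdict

# (method_variant, got_reply) pairs as they might come from an Activity log
results = [
    ("subject_line_A", False), ("subject_line_A", True), ("subject_line_A", False),
    ("subject_line_B", True), ("subject_line_B", True), ("subject_line_B", False),
]

tally = defaultdict(lambda: [0, 0])  # variant -> [replies, sends]
for variant, replied in results:
    tally[variant][0] += int(replied)
    tally[variant][1] += 1

for variant, (replies, sends) in sorted(tally.items()):
    print(f"{variant}: {replies}/{sends} = {replies / sends:.0%} reply rate")
# Feeding this comparison into the next campaign is Learning;
# running the variants without recording it is just motion.
```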

What to Measure
Experiment capture — Are variations being tracked and compared?
Insight documentation — When something works or fails, is it recorded?
Feedback velocity — How fast do learnings change the next cycle?
Without This

Experiments run but results aren't captured. When someone leaves, their knowledge leaves. New hires start from scratch. The organization never gets smarter.

The Sales Illustration

Different research approaches, email formats, subject line tones and lengths, messaging angles, follow-up timing — all experiments running simultaneously, almost never captured, rarely informing the next campaign.

R
Resources
What enables the work

Resources are what enables Activity — budget, tools, training, intelligence. Resource allocation is almost never connected to Activity or Output data. Decisions about where to invest are made based on politics, precedent, or intuition rather than evidence.

Linking Resources to Activity and Output means knowing which investments actually drive which results. It's the foundation for intelligent allocation.
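
A minimal sketch of what that linkage could look like: each Resource line item points at the Activities it enables, and those Activities point at measured Output. The names and figures are invented for illustration.

```python
# Hypothetical monthly line items and the Activities they enable
resources = {
    "data_vendor": {"monthly_cost": 2000, "activities": ["list_building"]},
    "prospecting_tool": {"monthly_cost": 1500, "activities": ["campaign_execution"]},
}
# Output attributed to each Activity (e.g. meetings sourced)
meetings_by_activity = {"list_building": 12, "campaign_execution": 30}

for name, item in resources.items():
    meetings = sum(meetings_by_activity.get(a, 0) for a in item["activities"])
    cost_per = item["monthly_cost"] / meetings if meetings else float("inf")
    print(f"{name}: ${item['monthly_cost']} -> {meetings} meetings (${cost_per:.0f} each)")
```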

What to Measure
Cost attribution — Can you connect spending to specific Activities?
ROI tracking — Do you know which Resources drive which Outputs?
Utilization rates — Are Resources being used effectively?
Without This

You spend money but can't prove what it produces. Budget fights are political, not analytical. Waste compounds because no one can see it.

The Sales Illustration

Budget allocation, prospecting tool costs, data vendor subscriptions, coaching and training programs, intent signal subscriptions, market intelligence services — all investments that should be tied to Activity and Output.

E
Environment
External conditions shaping what Activity can produce

Environment is everything outside your control that affects what your Activity can produce. The same Activity produces different Output in different Environments. Without tracking Environment, you can't interpret your results correctly.

Environment capture means recording the external conditions that shape your outcomes — so you know whether a change in Output reflects a change in Activity effectiveness or a change in conditions.
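
One minimal way to sketch this: tag each measurement cycle with the external conditions in effect, so a change in Output can be read against a change in context. The condition fields and numbers are illustrative, not a prescribed schema.

```python
# Each cycle records its Output alongside the Environment it ran in
cycles = [
    {"quarter": "Q1", "reply_rate": 0.062, "env": {"budget_season": True, "new_competitor": False}},
    {"quarter": "Q2", "reply_rate": 0.031, "env": {"budget_season": False, "new_competitor": True}},
]

for prev, curr in zip(cycles, cycles[1:]):
    delta = curr["reply_rate"] - prev["reply_rate"]
    shifts = {k: v for k, v in curr["env"].items() if prev["env"][k] != v}
    print(f"{prev['quarter']} -> {curr['quarter']}: reply rate {delta:+.1%}, "
          f"conditions changed: {shifts or 'none'}")
# Without the env fields, the Q2 drop reads as worse Activity;
# with them, it can be weighed against the shift in conditions.
```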

What to Measure
Market conditions — Are you tracking the external factors that affect your results?
Competitive dynamics — Do you know what competitors are doing?
Context changes — When conditions shift, are you detecting it?
Without This

You blame your team when the market shifted. You credit your strategy when you got lucky. Your interpretation of results is systematically wrong.

The Sales Illustration

Market conditions, buyer budget cycles, competitive moves, regulatory changes, economic indicators, industry sentiment — all the external factors that make the same outreach produce different results.

"AOLRE captures the complete picture: what you did (Activity), what happened (Output), what you learned (Learning), what you invested (Resources), and what you couldn't control (Environment). Miss any one, and your understanding is incomplete."

Calculate Your TLQ Score → See Sector Applications
Sector Applications

AOLRE applies to every function. The specifics differ by sector.

The five dimensions are universal. What counts as Activity, Output, Learning, Resources, and Environment varies by industry.

For-Profit Enterprise
Revenue, Competitive Advantage, Operational Efficiency

"The organizations that learn fastest will compound in ways static organizations cannot match."

Enterprise is where TLQ is easiest to measure because the Output signal is clearest: revenue. Every function — sales, marketing, operations, product — can map its Activity to measurable outcomes.

The competitive advantage of TLQ in enterprise is speed of learning. Two organizations with identical resources: one learns from every cycle, one doesn't. After a year, they're not even competing in the same league.

Stakes
Revenue compounds when learning compounds
Competitive advantage shifts to the fastest learner
Resource allocation without TLQ is guesswork at scale
Activity: Sales motions, marketing campaigns, product development work, operational processes
Output: Pipeline, conversion rates, revenue, customer retention, feature adoption
Learning: What worked, what didn't, why — captured and applied to next cycle
Resources: Budget, headcount, tools, training, external services
Environment: Market conditions, competitive moves, economic cycles, buyer sentiment
Low TLQ Scenario

"The sales team runs the same playbook for three years. Market shifted 18 months ago. Activity stays constant. Output declines. No one can explain why because no one tracked the Environment. Resources get cut — making the problem worse."

Government
Public Outcomes, Accountability, Service Delivery

"Citizens deserve to know not just what government did, but whether it learned anything."

Government faces unique TLQ challenges: Output is harder to measure, Learning is harder to implement under bureaucratic constraints, and Environment includes political dynamics the private sector doesn't face.

But the need is acute. Public programs run for decades without Learning infrastructure. The same interventions are tried, fail, and tried again because no one captured why they failed.

Stakes
Taxpayer resources allocated without evidence
Policy failures repeat across administrations
Public trust erodes when government can't demonstrate learning
Activity: Policy implementation, service delivery operations, regulatory enforcement
Output: Citizen outcomes, compliance rates, service quality metrics, program participation
Learning: What interventions worked, for whom, under what conditions
Resources: Budget allocation, staffing, technology investments, contractor spend
Environment: Political dynamics, economic conditions, demographic shifts, legal constraints
Low TLQ Scenario

"A jobs program runs for 15 years. Outcomes are measured at the program level, never connected to specific Activities. When it's finally evaluated, no one can say which components worked. The next administration starts over."

Healthcare
Patient Outcomes, Clinical Quality, Care Efficiency

"In healthcare, the gap between Activity and Output is measured in lives."

Healthcare has strong Output measurement (patient outcomes) but weak Activity capture (clinical processes are poorly documented) and almost no Learning infrastructure (insights stay with individual clinicians).

The stakes make TLQ urgent. The same clinical errors repeat across institutions because Learning isn't captured. The same Resource allocation mistakes happen because no one connects spending to outcomes.

Stakes
Patient safety depends on learning from every case
Clinical knowledge locked in individual practitioners
Resource allocation disconnected from outcome data
Activity: Clinical interventions, care protocols, diagnostic processes, treatment decisions
Output: Patient outcomes, complication rates, readmission rates, recovery times
Learning: Which interventions work for which patients under which conditions
Resources: Staffing, equipment, pharmaceuticals, facilities, training
Environment: Patient population, disease prevalence, regulatory requirements, payer dynamics
Low TLQ Scenario

"A surgical technique produces better outcomes at one hospital. No one captures why. The surgeon retires. The knowledge disappears. Other hospitals never learn the approach existed."

Education
Student Outcomes, Learning Effectiveness, Institutional Quality

"Educational institutions are supposed to be learning organizations. Most aren't."

Education is paradoxical: institutions dedicated to learning often have the weakest organizational learning. Teaching Activity is poorly captured. Student Output beyond grades is rarely measured. Learning about what works stays with individual teachers.

TLQ in education means treating the institution itself as a learner — not just the students.

Stakes
Effective teaching practices don't spread
Curriculum decisions based on tradition, not evidence
Student outcomes vary wildly with no explanation why
Activity: Teaching methods, curriculum delivery, student support, assessment practices
Output: Student learning outcomes, skill acquisition, graduation rates, career readiness
Learning: What teaching approaches work for which students in which contexts
Resources: Faculty time, facilities, technology, support services, materials
Environment: Student demographics, economic conditions, policy requirements, cultural factors
Low TLQ Scenario

"A teacher develops an approach that dramatically improves outcomes for struggling students. She retires after 30 years. No one documented her methods. The knowledge disappears. Students next year start over with whatever the new teacher brings."

Functions

AOLRE mapped to every function.

The five dimensions apply everywhere. Here's what they look like in six core functions — and what becomes possible when the loop closes.

Sales
Pipeline Generation to Closed Revenue
AOLRE Mapped
Activity: Account research, contact list generation, signal collection, account strategy, messaging development, campaign execution, follow-up sequences
Output: Opens → replies → positive replies → conversations → calls → meetings → proposals → negotiations → signatures → closed/won
Learning: Which research approaches work, which messaging angles convert, which sequences drive responses, which objection handling closes
Resources: Prospecting tools, data vendors, intent signals, coaching programs, CRM, dialers, content library
Environment: Market conditions, buyer budget cycles, competitive moves, economic indicators, industry sentiment
What 10x Looks Like
Every rep benefits from what the best rep learned — automatically
Messaging evolves based on response data, not intuition
Tool investments tie directly to pipeline impact
Market shifts are detected and playbooks adapt
The team gets smarter every quarter. New hires ramp in weeks, not months. The playbook is always current.
Marketing
Awareness to Qualified Pipeline
AOLRE Mapped
Activity: Content creation, campaign design, channel selection, audience targeting, creative development, distribution execution
Output: Impressions → clicks → visits → form fills → MQLs → SQLs → pipeline influenced → revenue attributed
Learning: Which content resonates, which channels perform, which audiences convert, which messages drive action
Resources: Ad spend, content tools, marketing automation, analytics platforms, creative resources, agency fees
Environment: Competitive messaging, platform algorithm changes, audience fatigue, market trends, regulatory constraints
What 10x Looks Like
Content strategy driven by conversion data, not calendar
Channel mix optimizes based on actual attribution
Budget flows to what works — provably
Messaging adapts as market positioning shifts
Every campaign teaches the next one. Attribution is real. Budget conversations are evidence-based.
Customer Success
Onboarding to Expansion Revenue
AOLRE Mapped
Activity: Onboarding programs, health monitoring, QBRs, feature adoption campaigns, risk intervention, expansion conversations
Output: Time-to-value → adoption rates → health scores → NPS → retention → expansion → referrals
Learning: Which onboarding paths accelerate value, which interventions prevent churn, which triggers predict expansion
Resources: CSM headcount, onboarding tools, health platforms, training content, community infrastructure
Environment: Customer company health, competitive alternatives, economic pressure, stakeholder changes, product roadmap
What 10x Looks Like
Onboarding paths personalize based on what actually drives adoption
Churn risk detected before it's too late
Expansion timing driven by data, not gut
External risk factors visible and actionable
Retention improves systematically. Expansion becomes predictable. The best CSM's playbook becomes everyone's.
Engineering
Requirements to Shipped Value
AOLRE Mapped
Activity: Requirement analysis, architecture design, implementation, code review, testing, deployment, documentation
Output: Stories completed → PRs merged → deployments → bugs found → incidents → feature adoption → user satisfaction
Learning: Which estimation approaches work, which architecture patterns scale, which review practices catch bugs, which deployments fail
Resources: Developer time, infrastructure costs, tooling subscriptions, training, technical debt budget
Environment: Technical dependencies, platform changes, security landscape, talent market, user behavior shifts
What 10x Looks Like
Estimation accuracy improves based on actual outcomes
Architecture decisions informed by what worked before
Infrastructure spend tied to actual usage patterns
External dependency risks visible early
Velocity is predictable. Quality is consistent. The team gets faster without burning out.
Financial Analysis
Data to Decision-Ready Insight
AOLRE Mapped
Activity: Data collection, model building, scenario analysis, forecast generation, report creation, stakeholder communication
Output: Data quality → model accuracy → forecast precision → insight adoption → decision impact → value delivered
Learning: Which data sources prove reliable, which models predict well, which presentations drive action
Resources: Analyst time, data tools, BI platforms, financial systems, external data subscriptions
Environment: Regulatory changes, market volatility, stakeholder priorities, data availability, economic conditions
What 10x Looks Like
Forecast accuracy improves with every cycle
Models evolve based on prediction outcomes
Reports drive decisions, not just inform
External signals integrated into planning
Forecasts become reliable. Budgets reflect reality. Finance becomes a strategic partner, not a reporting function.
Customer Support
Issue to Resolution to Prevention
AOLRE Mapped
Activity: Ticket triage, issue diagnosis, resolution execution, escalation management, knowledge documentation, proactive outreach
Output: Response time → resolution time → first-contact resolution → CSAT → ticket deflection → issue prevention
Learning: Which resolution paths work fastest, which issues recur, which documentation deflects tickets
Resources: Agent headcount, ticketing tools, knowledge base, AI assistants, training programs
Environment: Product changes, seasonal volume, customer segment shifts, competitive support levels
What 10x Looks Like
Resolution paths optimize based on what actually works
Recurring issues feed back to product
Knowledge base evolves from ticket patterns
Volume spikes predicted and staffed
Support becomes prevention. The same issue never surprises twice. Agents get better tools, not just more tickets.
Manifesto

What we believe about organizations and learning.

The convictions behind the TLQ Framework — why measuring organizational learning matters, and why now.

We believe organizations should learn from everything they do. Not occasionally. Not when someone remembers. Every cycle.

This is not a new idea. Peter Senge described the learning organization in 1990. Everyone agreed it was right. No one could measure it. What you can't measure, you can't manage. So the learning organization remained an aspiration — a philosophy, not an operating system.

TLQ changes that. Five dimensions. One score. For the first time, you can measure whether your organization is learning — and know exactly where it isn't.

Every organizational action is an experiment. The question is not whether experiments are running — they always are. The question is whether the results are captured.
Activity without Output measurement is motion without direction. Output without Activity tracking is results without explanation. Neither is learning.
Learning is infrastructure, not culture. You cannot will an organization to learn. You have to build the systems that capture what happened and feed it back.
Resources allocated without connection to Activity and Output are resources wasted. Every budget decision should be traceable to evidence.
Environment shapes everything. The same Activity produces different Output in different conditions. Ignoring Environment means misunderstanding your own results.

TLQ is an open framework. No certification required. No licensing. No gatekeepers. Take the audit. Score your organization. See where you stand.

The organizations that measure their learning will outpace the ones that don't. Not because measurement is magic — but because you can only improve what you can see.

"Help organizations learn."

— Mission

Join the Movement →
Why Now

The learning organization was described in 1990. It can be measured in 2024.

Five forces converging to make organizational learning measurement possible — and necessary.

01
Activity capture is finally possible

Modern tools log everything. We now have the raw data to know what organizations actually do — not just what they report doing.

02
Output measurement is standard

CRM, analytics, BI tools — organizations know their outcomes. The gap is connecting Activity to Output, not measuring Output alone.

03
AI makes pattern detection scalable

Finding the signal in Activity/Output data used to require armies of analysts. AI makes it possible to detect learning opportunities at scale.

04
The cost of not learning is visible

Organizations that learn are pulling ahead. The compounding disadvantage of static organizations is now measurable. The stakes are clear.

05
The framework exists

AOLRE provides the structure. Five dimensions, each measurable, together capturing the complete picture of organizational learning. No more philosophy without measurement.

Three ways organizations respond
Position 1: Ignore
"We're doing fine without measuring learning."

This was defensible when learning couldn't be measured. It's not defensible now. The organizations measuring their TLQ are pulling ahead. Ignoring the metric doesn't make the gap disappear.

Position 2: Measure Partially
"We track some of this."

Most organizations are here. They measure Output. Maybe some Activity. Learning, Resources, Environment? Almost never connected. Partial measurement produces partial understanding.

Position 3: Adopt AOLRE
"We measure all five dimensions."

Complete picture. Real feedback loops. Every cycle improves the next. This is the learning organization — not as aspiration, but as operating system.

"The question isn't whether to measure organizational learning. It's whether you measure it before or after your competitors do."

Take the TLQ Audit → See the Framework
TLQ AUDIT v1.0
Submit results to community registry →
Assessment

The TLQ Audit.

Five dimensions. Score each 1–5. Your total is your Learning Quotient — and a map of where to focus.

A · Dimension 01 / 05
Activity

Do you systematically capture what your organization does — the specific actions, methods, and variations — or just what it achieves?

Score: 1 = No activity logging · 5 = Complete action capture
O · Dimension 02 / 05
Output

Do you measure outcomes at every stage of your processes — or just the final results?

Score: 1 = Final outcomes only · 5 = Every stage measured
L · Dimension 03 / 05
Learning

When Activity produces Output, is that connection captured? Does what you learn change the next cycle?

Score: 1 = No feedback loop · 5 = Continuous learning capture
R · Dimension 04 / 05
Resources

Can you connect your investments — budget, tools, training — to the Activity they enable and the Output they produce?

Score: 1 = Spending disconnected · 5 = Full ROI traceability
E · Dimension 05 / 05
Environment

Do you track the external conditions — market, competition, context — that shape what your Activity can produce?

Score: 1 = Context ignored · 5 = Environment systematically tracked
Your TLQ Score: ___ / 25
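
The arithmetic behind the audit is deliberately simple; here is a minimal sketch, with invented example scores:

```python
# Five dimension scores, each 1-5, summed into a TLQ out of 25.
scores = {"Activity": 3, "Output": 4, "Learning": 1, "Resources": 2, "Environment": 2}

assert all(1 <= s <= 5 for s in scores.values()), "each dimension is scored 1-5"
tlq = sum(scores.values())
weakest = min(scores, key=scores.get)
print(f"TLQ: {tlq} / 25 (build {weakest} first)")  # the lowest dimension is where to start
```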

See the Framework →
Community

Building the learning organization together.

An open framework developed by practitioners committed to measuring and improving organizational learning.

Join Us
Become part of the movement.
Join the Community →
Who is building the TLQ Framework
Corporations & Enterprises
Adopters
The Role

Organizations implementing TLQ to measure and improve their learning capacity. The proof that the framework works comes from real-world adoption.

Core Responsibilities
Implement AOLRE measurement in their context
Share learnings with the community
Validate the framework through practice
The Honest Tension

Measuring learning reveals gaps. Some organizations would rather not know. Adopters must be ready to see — and act on — uncomfortable truths.

What They Contribute

Real-world validation and sector-specific adaptations that prove TLQ works in practice.

Consulting & Advisory
Implementers
The Role

Professionals who help organizations assess and improve their TLQ scores. They translate the framework into practice.

Core Responsibilities
Assess client organizations using AOLRE
Design improvement roadmaps by dimension
Train internal teams to maintain TLQ
The Honest Tension

Consultants benefit from complexity. TLQ is simple by design. The tension is between billable hours and framework fidelity.

What They Contribute

Implementation expertise and cross-industry pattern recognition.

Technology Providers
Enablers
The Role

Companies building tools that capture Activity, measure Output, and enable Learning infrastructure. The technology layer that makes TLQ measurable at scale.

Core Responsibilities
Build Activity capture into products by default
Connect Activity to Output in measurable ways
Enable Learning loops through product design
The Honest Tension

Shipping features is easier than shipping learning infrastructure. The tension is between velocity and measurement capability.

What They Contribute

The infrastructure that makes TLQ measurement automatic rather than manual.

Researchers & Academics
Validators
The Role

Scholars studying organizational learning, validating TLQ against empirical evidence, and extending the framework through rigorous research.

Core Responsibilities
Study TLQ correlation with organizational outcomes
Validate the framework through peer-reviewed research
Extend understanding of each dimension
The Honest Tension

Academic rigor takes time. Practitioners need answers now. The tension is between certainty and utility.

What They Contribute

Empirical validation and theoretical depth that strengthen the framework's foundation.

Individual Practitioners
Champions
The Role

People inside organizations who champion TLQ adoption — often without formal authority. The internal advocates who make change happen.

Core Responsibilities
Advocate for TLQ measurement internally
Pilot AOLRE in their own function
Document what works and what doesn't
The Honest Tension

Championing change is risky. If TLQ doesn't produce results, champions take the blame. The tension is between advocacy and career safety.

What They Contribute

Ground-level adoption that proves TLQ works from the inside out.

Community Contributors
Builders
The Role

People contributing to TLQ's development — writing documentation, building tools, creating sector-specific applications, improving the framework itself.

Core Responsibilities
Contribute improvements to framework and tools
Create sector-specific implementations
Support new adopters in the community
The Honest Tension

Open source work is often unpaid. The tension is between contribution and compensation.

What They Contribute

The infrastructure, documentation, and tooling that makes TLQ accessible to everyone.

Five principles for framework development
01 Open by Default: The framework is freely available. No licensing, no certification requirements. Take it and use it.
02 Evidence Over Opinion: Framework changes require evidence from real implementations. Theory is not enough.
03 Practitioner Voice: People implementing TLQ have the strongest voice in its development. Users lead.
04 Simplicity Preserved: Five dimensions. No more. Complexity is the enemy of adoption.
05 Continuous Learning: The framework practices what it preaches. We measure our own AOLRE.
Register Your Organization → GitHub Discussions →