How I Built a 40% Response Rate with Multi-Dimensional Scoring

A pattern for building configurable scoring systems that actually work



When I started my job search, I was applying to everything that looked vaguely interesting. The result? A 5% response rate and a lot of wasted time on companies that were never going to be a good fit.

Six weeks later, my response rate hit 40%. The difference wasn't better resume writing or networking—it was a scoring system that helped me focus on the right opportunities.

This post isn't really about job searching. It's about a pattern that works anywhere you need to make consistent decisions across many options: lead scoring, content prioritization, feature selection, risk assessment. The domain is just the example.


The Pattern: CMF (Candidate-Market Fit) Scoring

The core insight: multi-dimensional scoring with configurable weights beats gut feel every time.

Here's the formula:

Total Score = Σ (Dimension Score × Weight)

Each dimension gets:

  1. A score (0-100) based on pattern matching
  2. A weight (how much this dimension matters)
  3. A threshold (minimum acceptable score)

The system outputs a single number you can sort by, plus a breakdown showing why something scored high or low.
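In code, the formula is a short reduce. A minimal sketch (the dimension names and weights here are illustrative, not the job-search ones):

```javascript
// Minimal weighted-sum scorer. Dimension names and weights below are
// illustrative; weights should sum to 1.0.
function weightedScore(scores, weights) {
  // Total Score = Σ (Dimension Score × Weight), rounded for sorting
  return Math.round(
    Object.keys(weights).reduce(
      (total, dim) => total + scores[dim] * weights[dim],
      0
    )
  );
}

const total = weightedScore(
  { fit: 90, urgency: 50 },   // dimension scores, 0-100
  { fit: 0.7, urgency: 0.3 }  // weights, summing to 1.0
);
// 90 × 0.7 + 50 × 0.3 = 78
```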


The Dimensions (Job Search Example)

For my job search, I identified six dimensions:

Dimension          Weight   What It Measures
Sweet Spot Match   32%      Does this role match my target positions?
Company Tier       18%      Is this a company I'd be excited to join?
Role Level         14%      Is this the right seniority?
Location           18%      Does the location work for my life?
Freshness          8%       How recent is this posting?
Salary             10%      Does compensation meet my target?

The weights are crucial. Sweet Spot Match at 32% means a perfect role at an unknown company beats a mediocre role at a dream company. That's a choice, encoded in the system.


Scoring Logic: Pattern Matching

Each dimension uses pattern matching to assign scores. Here's how Sweet Spot works:

JavaScript
const SWEET_SPOTS = {
  strategyOps: {     // Primary target (60% of effort)
    score: 100,
    patterns: [
      /product\s+strateg/i,
      /product\s+ops/i,
      /head\s+of\s+product\s+ops/i,
    ],
  },
  foundingPM: {      // Secondary target (30% of effort)
    score: 90,
    patterns: [
      /founding\s+(pm|product)/i,
      /first\s+(pm|product)/i,
      /0-1\s+product/i,
    ],
  },
  corePM: {          // Acceptable (10% of effort)
    score: 75,
    patterns: [
      /senior\s+product\s+manager/i,
      /staff\s+product\s+manager/i,
      /ai\s+product/i,
    ],
  },
};

The system scans the job title and description, finds the first matching pattern, and assigns the corresponding score. No match? Default to 30 (unlikely fit).
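The scan itself isn't shown above, so here's a minimal sketch of the first-match logic, assuming a title-only check (the real version also scans the description):

```javascript
// Sketch of the first-match scan: walk categories in priority order,
// return the first whose pattern matches, default to 30 otherwise.
const SWEET_SPOTS = {
  strategyOps: { score: 100, patterns: [/product\s+strateg/i, /product\s+ops/i] },
  foundingPM:  { score: 90,  patterns: [/founding\s+(pm|product)/i] },
  corePM:      { score: 75,  patterns: [/senior\s+product\s+manager/i] },
};

function scoreSweetSpot(title) {
  for (const [category, { score, patterns }] of Object.entries(SWEET_SPOTS)) {
    if (patterns.some((p) => p.test(title))) {
      return { score, category };
    }
  }
  return { score: 30, category: 'none' }; // no match → unlikely fit
}

scoreSweetSpot('Head of Product Strategy'); // { score: 100, category: 'strategyOps' }
```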


The Tiering System

Some dimensions use tiered lookups instead of pattern matching:

JavaScript
const COMPANY_TIERS = {
  target: {      // Tier 1: Primary targets
    score: 100,
    companies: ['databricks', 'snowflake', 'stripe', 'anthropic'],
  },
  stretch: {     // Tier 1.5: Aspirational
    score: 95,
    companies: ['canva', 'figma', 'notion', 'openai'],
  },
  known: {       // Tier 2: Solid options
    score: 80,
    companies: ['vercel', 'supabase', 'airbnb', 'retool'],
  },
  unknown: {     // Tier 3: Default
    score: 50,
  },
};

This lets me pre-encode my research. A company I've never heard of gets 50 points. A company I've specifically researched and want to target gets 100.
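The lookup side is a few lines. A sketch, assuming company names are normalized to lowercase before comparison:

```javascript
// Sketch of the tiered lookup: check each tier's company list,
// fall back to the 'unknown' tier's default score.
const COMPANY_TIERS = {
  target:  { score: 100, companies: ['databricks', 'stripe', 'anthropic'] },
  stretch: { score: 95,  companies: ['figma', 'notion', 'openai'] },
  known:   { score: 80,  companies: ['vercel', 'supabase', 'retool'] },
  unknown: { score: 50 },
};

function scoreCompanyTier(company) {
  const name = company.trim().toLowerCase();
  for (const [tier, { score, companies }] of Object.entries(COMPANY_TIERS)) {
    if (companies && companies.includes(name)) return { score, tier };
  }
  return { score: COMPANY_TIERS.unknown.score, tier: 'unknown' };
}
```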


Location Scoring: Priority Cascades

Location shows a different pattern—a priority cascade:

JavaScript
const LOCATION_SCORES = {
  denver:  { score: 100, patterns: [/denver/i, /colorado/i] },
  remote:  { score: 90,  patterns: [/remote/i, /hybrid/i] },
  boston:  { score: 70,  patterns: [/boston/i, /cambridge/i] },
  sfBay:   { score: 60,  patterns: [/san\s+francisco/i, /bay\s+area/i] },
  nyc:     { score: 50,  patterns: [/new\s+york/i, /nyc/i] },
  other:   { score: 30 },
};

Denver is home (100). Remote is almost as good (90). SF is acceptable but expensive (60). This encoding captures my real preferences in a way I can audit and adjust.


Time Decay: Freshness

Freshness shows a decay function:

JavaScript
function scoreFreshness(postedTime) {
  const hoursAgo = parsePostedTime(postedTime);

  if (hoursAgo <= 6) return 100;    // Just posted
  if (hoursAgo <= 24) return 80;   // Today
  if (hoursAgo <= 72) return 50;   // This week
  if (hoursAgo <= 168) return 30;  // Last week
  return 10;                        // Stale
}

Older jobs are more competitive. Fresh jobs mean I might be the first qualified applicant. The decay curve encodes this reality.
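`parsePostedTime` is doing quiet work in that function and isn't shown above. Here's one hypothetical version, assuming posted times arrive as strings like "3 hours ago" or "2 days ago" (real listing sites vary; adjust the regex to your source):

```javascript
// Hypothetical parsePostedTime: converts strings like "3 hours ago"
// or "2 days ago" into a number of hours.
function parsePostedTime(postedTime) {
  const match = postedTime.match(/(\d+)\s*(minute|hour|day|week)/i);
  if (!match) return Infinity; // unparseable → treat as stale
  const amount = parseInt(match[1], 10);
  const unit = match[2].toLowerCase();
  const hoursPerUnit = { minute: 1 / 60, hour: 1, day: 24, week: 168 };
  return amount * hoursPerUnit[unit];
}

parsePostedTime('3 hours ago'); // 3
parsePostedTime('2 days ago');  // 48
```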


Putting It Together

The final calculation:

JavaScript
function calculateScore(listing) {
  const sweetSpot = scoreSweetSpot(listing.title, listing.company);
  const companyTier = scoreCompanyTier(listing.company);
  const roleLevel = scoreRoleLevel(listing.title);
  const location = scoreLocation(listing.location);
  const freshness = scoreFreshness(listing.postedTime);
  const salary = scoreSalary(listing);

  const weightedScore = Math.round(
    sweetSpot.score * 0.32 +
    companyTier.score * 0.18 +
    roleLevel.score * 0.14 +
    location.score * 0.18 +
    freshness * 0.08 +
    salary.score * 0.10
  );

  return {
    total: weightedScore,
    breakdown: {
      sweetSpot: { ...sweetSpot, weight: 32 },
      companyTier: { ...companyTier, weight: 18 },
      roleLevel: { ...roleLevel, weight: 14 },
      location: { ...location, weight: 18 },
      freshness: { score: freshness, weight: 8 },
      salary: { ...salary, weight: 10 },
    },
  };
}

The breakdown is as important as the total. When a job scores 45, I can see why: great company (90) but wrong role level (40) and bad location (30).


Threshold Filtering

The threshold is where the magic happens. Mine is 65.

Below 65? Don't even look at it. Above 65? Worth a closer look. Above 80? Apply immediately.

This simple filter eliminated 60% of the noise. I went from reviewing 50 jobs/day to reviewing 20 qualified ones.
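The triage step is a few lines of code. A sketch using the threshold numbers above (any score function returning a `total` field works here; in my case that's `calculateScore`):

```javascript
// Sketch of the triage step: score everything, drop items below the
// threshold, sort the rest, and flag the "apply immediately" tier.
const THRESHOLD = 65;
const AUTO_APPLY = 80;

function triage(listings, scoreFn) {
  return listings
    .map((listing) => ({ listing, ...scoreFn(listing) }))
    .filter((item) => item.total >= THRESHOLD)
    .sort((a, b) => b.total - a.total)
    .map((item) => ({ ...item, applyNow: item.total >= AUTO_APPLY }));
}
```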


The Domain-Agnostic Pattern

Strip away the job search specifics, and here's the reusable pattern:

1. Identify Your Dimensions

What factors actually matter for this decision? Be specific. "Quality" isn't a dimension—"Has test coverage," "Under 500ms response time," and "Handles edge cases" are dimensions.

2. Assign Weights

Not all dimensions matter equally. Force yourself to allocate 100% across dimensions. This exposes hidden assumptions.

3. Define Scoring Logic

Each dimension needs a score(input) → 0-100 function. Use:

  • Pattern matching for text classification
  • Tiered lookups for categorical data
  • Decay functions for time-sensitive data
  • Range mapping for numerical data
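Range mapping is the one tool in that list not demonstrated earlier. A hypothetical example: map a salary linearly onto 0-100 between a floor and a target (the numbers are illustrative):

```javascript
// Hypothetical range mapper: linearly maps a numeric value onto 0-100
// between a floor (score 0) and a target (score 100).
function mapRange(value, floor, target) {
  if (value <= floor) return 0;
  if (value >= target) return 100;
  return Math.round(((value - floor) / (target - floor)) * 100);
}

mapRange(150_000, 120_000, 180_000); // 50: halfway between floor and target
```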

4. Set Thresholds

What's the minimum acceptable score? What score triggers immediate action? Set these before you start scoring—gut feel during scoring leads to inconsistency.

5. Preserve the Breakdown

The total score is for sorting. The breakdown is for understanding. Always keep both.


This pattern works for:

Lead Scoring (Sales)

  • Dimensions: Company size, industry fit, engagement signals, timeline urgency, budget indicators
  • Weights: Budget (30%), timeline (25%), fit (25%), engagement (20%)
  • Threshold: Score > 70 = sales-qualified, > 50 = marketing-qualified
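To make the domain swap concrete, here's that lead-scoring configuration dropped into the same weighted-sum pattern (the dimension scores, 0-100, would come from your own scoring functions; the values here are illustrative):

```javascript
// The same weighted-sum pattern with the lead-scoring weights above.
const LEAD_WEIGHTS = { budget: 0.30, timeline: 0.25, fit: 0.25, engagement: 0.20 };

function scoreLead(dimensionScores) {
  return Math.round(
    Object.entries(LEAD_WEIGHTS).reduce(
      (total, [dim, weight]) => total + dimensionScores[dim] * weight,
      0
    )
  );
}

const lead = scoreLead({ budget: 90, timeline: 70, fit: 80, engagement: 60 });
// 90×0.30 + 70×0.25 + 80×0.25 + 60×0.20 = 76.5 → 77, i.e. sales-qualified
```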

Content Prioritization (Product)

  • Dimensions: User request volume, strategic alignment, effort estimate, revenue impact
  • Weights: Strategic alignment (35%), user volume (25%), revenue (25%), effort-inverse (15%)
  • Threshold: Score > 75 = next quarter, > 60 = backlog

Feature Prioritization (RICE alternative)

  • Dimensions: Reach, impact, confidence, effort (inverted)
  • Weights: Configurable per team's priorities
  • Threshold: Based on capacity

The Code

The full scoring framework is available as a standalone npm package with examples for lead scoring, feature prioritization, and more.

GitHub: github.com/szoloth/multi-dimensional-scorer

Terminal
npm install multi-dimensional-scorer

Why This Works

Three reasons:

  1. Consistency: Every item gets scored the same way. No "I was tired that day" variance.

  2. Transparency: When someone asks "why did we prioritize X over Y?"—you can show the math. The weights and thresholds are explicit decisions, not hidden assumptions.

  3. Iteration: When your priorities change, you change the weights. The system adapts without rebuilding from scratch.


The Results

For my job search:

  • Response rate: 5% → 40%
  • Time spent reviewing: 4 hours/day → 45 minutes/day
  • Qualified applications submitted: 3x more
  • Interviews: 4x more (with better-fit companies)

The scoring system didn't make me a better candidate. It made me better at finding the right opportunities.


Try It

Pick a decision you make repeatedly. List the factors that matter. Assign weights. Build a simple scorer.

You'll be surprised how much clarity emerges when you force yourself to encode your priorities in code.