R&D Productivity · February 24, 2026

10 SR&ED Examples for Software Companies: What Actually Qualifies

Abstract eligibility criteria are hard to apply to real work. These 10 SR&ED examples show what qualifies, what doesn't, and why.


Chrono Innovation

Engineering Team

The three-part eligibility test for SR&ED is clear in the abstract: technological uncertainty, systematic investigation, technological advancement. In practice, applying those criteria to a sprint board full of real engineering work is harder than it looks.

The most useful thing for most software companies isn’t a deeper explanation of the criteria. It’s concrete examples showing where the line falls and why. These 10 examples cover the spectrum from clearly qualifying to clearly excluded, with the reasoning CRA applies to each. For the full eligibility framework, see SR&ED Eligible Activities for Software Companies.

Spectrum of qualifying versus non-qualifying SR&ED work in software development

Example 1: Novel Real-Time Anomaly Detection Algorithm

What the team did: An e-commerce platform needed to detect fraudulent transactions in under 50ms to avoid blocking legitimate checkout flows. Existing fraud detection approaches either exceeded the latency target or produced false positive rates that would reject too many real customers. The engineering team spent three months investigating whether a hybrid approach combining statistical baselines with lightweight ML inference could satisfy both constraints simultaneously.

Does it qualify? Yes.

Why: The uncertainty was genuine. No published approach addressed the specific combination of latency target and false positive tolerance under their transaction volume. The investigation was systematic: the team formulated specific hypotheses about which model architectures could fit within the latency budget, tested them against defined benchmarks, and documented why each rejected approach failed. The resulting algorithm addressed a constraint space no existing published solution covered.

The documentation that matters: The benchmark methodology used to evaluate each approach. The specific latency and false positive measurements at each stage. The written explanation of why published approaches were insufficient.
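As a sketch of what that benchmark methodology can look like in practice: the harness below measures the two constraints the investigation had to satisfy simultaneously, p99 latency and false-positive rate. The scorer is a stub standing in for the hybrid approach; the 50ms budget is the only number taken from the example.

```python
import random
import statistics
import time

def score_transaction(txn):
    """Stub for the hybrid scorer: a cheap statistical baseline plus a
    lightweight model pass. Stands in for the approach under test."""
    baseline = abs(txn["amount"] - 50.0) / 500.0      # statistical prior
    model = 0.3 * baseline + 0.05 * random.random()   # stub ML inference
    return max(baseline, model)

def run_benchmark(transactions, threshold, latency_budget_ms=50.0):
    """Measure per-transaction p99 latency and the false-positive rate
    on legitimate transactions against labeled data."""
    latencies, false_positives, legit = [], 0, 0
    for txn in transactions:
        start = time.perf_counter()
        flagged = score_transaction(txn) > threshold
        latencies.append((time.perf_counter() - start) * 1000.0)
        if not txn["fraud"]:
            legit += 1
            false_positives += flagged
    p99 = statistics.quantiles(latencies, n=100)[98]  # 99th percentile cut
    return {
        "p99_ms": p99,
        "meets_latency": p99 <= latency_budget_ms,
        "false_positive_rate": false_positives / max(legit, 1),
    }

txns = [{"amount": random.uniform(1, 500), "fraud": random.random() < 0.02}
        for _ in range(5000)]
result = run_benchmark(txns, threshold=0.6)
print(result)
```

Recording this kind of output for each rejected architecture is exactly the "specific latency and false positive measurements at each stage" the claim needs.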

Example 2: Implementing a Standard Recommendation Algorithm

What the team did: A retail company added product recommendations using collaborative filtering, a well-documented technique with extensive open-source implementations. Engineers spent six weeks integrating a recommendation library, tuning parameters, and A/B testing different placements.

Does it qualify? No.

Why: The solution was determinable by a qualified practitioner from existing knowledge. Collaborative filtering is well-documented. Libraries implementing it are widely available. Tuning parameters and testing placements is standard practice. A competent engineer could have predicted, with reasonable confidence, how to implement this based on existing public resources.

The line: Applying known methods, even skillfully, is not SR&ED. The uncertainty must be genuine. The answer must not be knowable from existing literature.
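To make "determinable from existing knowledge" concrete: user-based collaborative filtering, scored with cosine similarity, is a textbook recipe that fits in a few dozen lines. This is a toy sketch, not the retail company's implementation, but it shows why no genuine uncertainty is involved.

```python
from math import sqrt

# Toy interaction matrix: ratings[user][item] = score
ratings = {
    "alice": {"shoes": 5, "hat": 3, "bag": 4},
    "bob":   {"shoes": 4, "hat": 4},
    "carol": {"hat": 2, "bag": 5, "belt": 4},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm = sqrt(sum(x * x for x in u.values())) * \
           sqrt(sum(x * x for x in v.values()))
    return dot / norm

def recommend(target, k=2):
    """Score items the target hasn't rated by similarity-weighted
    ratings from other users, the standard user-based CF step."""
    scores = {}
    for other, theirs in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], theirs)
        for item, r in theirs.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("bob"))  # → ['bag', 'belt']
```

Every step here comes straight from published material, which is precisely why the six weeks of skilled integration work around it does not qualify.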

Example 3: Investigating Memory Corruption in a Custom Runtime

What the team did: A fintech company built a custom transaction processing runtime in C++ for performance. After deployment, they encountered intermittent memory corruption causing silent data errors under specific load conditions. Standard debugging tools couldn’t reproduce the issue deterministically. The team spent eight weeks developing custom instrumentation to capture memory state at microsecond intervals, testing multiple hypotheses about race conditions in their lock-free data structures, and ultimately identifying an interaction between their memory allocator and the processor’s cache coherency behavior.

Does it qualify? Yes.

Why: A genuinely difficult engineering problem requiring original investigation. The interaction between the custom allocator and cache coherency behavior wasn’t documented in existing literature for their specific architecture. The investigation was systematic: hypothesis formulation, custom instrumentation, controlled experiments, documented results. The advancement (understanding how a specific class of lock-free data structures interacts with cache coherency under their conditions) was real technical knowledge that didn’t exist before.

The documentation that matters: The specific hypotheses tested and experiments designed for each. The instrumentation built for the investigation. The progression of understanding as each approach was tested.

Example 4: Bug Fix Using Standard Debugging Procedures

What the team did: Engineers spent 40 hours debugging a race condition in a Node.js API service causing intermittent 500 errors under concurrent load. They used standard tools (Node.js inspector, async stack traces, a load testing harness), identified a missing mutex around a shared cache, and fixed it.

Does it qualify? No.

Why: The debugging process applied established tools and methods to a known class of problem. Race conditions in concurrent programming are well-understood. The solution, protecting shared state with a mutex, is standard practice. A qualified engineer applying known techniques would have resolved this.

The distinction: The work was technically difficult and required skilled engineers. Difficulty is not the criterion. Uncertainty is. If a skilled practitioner could solve it by applying known techniques, it doesn’t qualify regardless of how much time it took.
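For illustration, here is the class of fix described, shown in Python rather than Node.js: protecting a shared cache with a lock so the existence check and the write can't interleave. The point is how standard the pattern is; any concurrency reference covers it.

```python
import threading

class SharedCache:
    """Textbook fix for the bug class described: serialize access to
    shared mutable state with a mutex. Standard practice, not SR&ED."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def get_or_compute(self, key, compute):
        with self._lock:                     # without this, concurrent
            if key not in self._data:        # callers can interleave the
                self._data[key] = compute()  # check and the write: a race
            return self._data[key]

cache = SharedCache()
calls = []
def expensive():
    calls.append(1)
    return 42

threads = [threading.Thread(target=lambda: cache.get_or_compute("k", expensive))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(len(calls))  # → 1: the computation ran once despite 8 callers
```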

Example 5: Building a Novel Data Compression Format for Edge Devices

What the team did: An IoT company needed to transmit telemetry data from devices with 8KB RAM and 2G connectivity. Existing compression formats were too computationally expensive for the hardware or produced insufficient compression ratios for their data structure. The team spent five months developing and validating a custom compression scheme adapted to the statistical properties of their sensor data, running on hardware with strict memory constraints.

Does it qualify? Yes.

Why: The uncertainty was real. No existing format satisfied both the memory budget and compression ratio requirements for their specific data type. The investigation was systematic, with defined performance targets, controlled benchmarks, and documented experimental results for each approach tried. The resulting format is genuinely novel: a technical artifact addressing a constraint space not covered by existing public knowledge.

The documentation that matters: The specific hardware constraints and performance targets. Benchmarks against existing formats demonstrating their inadequacy. Experimental results at each development stage.
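The article doesn't describe the company's actual scheme, but a common building block for this kind of telemetry (delta encoding plus zigzag/varint packing, as used in formats like Protocol Buffers) gives a flavor of the constraint space: slowly drifting sensor values compress to one byte per reading, with almost no working memory required.

```python
def encode_varint(n):
    """LEB128-style variable-length encoding of a non-negative int."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def zigzag(n):
    """Map signed deltas to unsigned so small magnitudes stay small:
    0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ..."""
    return (n << 1) ^ (n >> 63)

def compress(readings):
    """Store the first reading, then zigzag-varint each delta."""
    out = bytearray(encode_varint(zigzag(readings[0])))
    for prev, cur in zip(readings, readings[1:]):
        out += encode_varint(zigzag(cur - prev))
    return bytes(out)

readings = [1000, 1002, 1001, 1005, 1005, 998]
packed = compress(readings)
print(len(packed), "bytes vs", len(readings) * 4, "raw")  # → 7 bytes vs 24 raw
```

The qualifying work begins where building blocks like these stop being sufficient: making the scheme fit the statistical properties of the specific sensor data within an 8KB RAM budget.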

Example 6: Tuning ML Model Performance Using Standard Techniques

What the team did: A SaaS company fine-tuned a pre-trained language model on their domain-specific dataset. Engineers ran standard hyperparameter optimization (learning rate schedules, batch sizes, regularization parameters) and evaluated model quality on a held-out test set. Two months of work.

Does it qualify? No.

Why: Fine-tuning pre-trained models using hyperparameter optimization is established practice. The techniques are documented, tools are widely available, and a qualified practitioner would know how to approach this from existing resources. Applying known methods to new data doesn’t create technological uncertainty.

The potential exception: If fine-tuning revealed a specific failure mode requiring original investigation to diagnose and address, something published literature didn’t cover, that specific investigation might qualify. The routine fine-tuning does not.
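A sketch of why this counts as routine: the whole search is a loop over a published grid, with the scoring function below standing in for "fine-tune and evaluate on the held-out set." All values here are illustrative.

```python
from itertools import product

# Hypothetical search space; grids like these come straight from
# library docs and published recipes, which is why this is routine.
learning_rates = [1e-5, 3e-5, 1e-4]
batch_sizes = [16, 32]
weight_decays = [0.0, 0.01]

def evaluate(lr, batch, decay):
    """Stub for fine-tune-and-score on a held-out set (toy surface)."""
    return 1.0 - abs(lr - 3e-5) * 1e4 - 0.001 * batch - decay

# Exhaustive grid search: try every combination, keep the best score.
best = max(product(learning_rates, batch_sizes, weight_decays),
           key=lambda cfg: evaluate(*cfg))
print(best)  # → (3e-05, 16, 0.0): the highest-scoring grid point
```

Nothing in the loop involves a question a qualified practitioner couldn't answer from existing resources; that is the CRA's test, not how long the grid takes to run.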

Example 7: Designing a Novel Distributed Consensus Mechanism

What the team did: A blockchain infrastructure company needed consensus with sub-two-second finality while tolerating up to 33% Byzantine node failures. Existing protocols (PBFT, Tendermint, HotStuff) couldn’t satisfy both constraints simultaneously at their required network scale. The team spent eight months investigating modifications to HotStuff, testing security properties against formal threat models, and validating performance through simulation.

Does it qualify? Yes.

Why: The constraint space wasn’t satisfied by existing published protocols. The investigation was rigorous and systematic, involving formal security analysis alongside empirical performance validation. The result extends the state of knowledge in distributed systems.

The documentation that matters: The constraint analysis demonstrating existing protocols’ inadequacy. Formal security analysis for proposed modifications. Simulation methodology and results.

Example 8: Integration of Existing APIs for a New Product Feature

What the team did: A project management SaaS company added a Slack integration. Engineers spent three weeks studying Slack API documentation, implementing webhook handling, building connection settings UI, and testing end-to-end.

Does it qualify? No.

Why: Integrating well-documented APIs is standard practice. The Slack API is thoroughly documented. A qualified developer would know how to implement this by following documentation. No technological uncertainty, no investigation beyond reading and applying existing resources.

A note on exceptions: If the integration revealed undocumented API behavior requiring original investigation, that specific investigation might qualify. The integration itself doesn’t.

Example 9: Developing a Custom Query Optimizer for Sparse Datasets

What the team did: A geospatial analytics company needed analytical queries over sparse, irregularly distributed geographic data. Existing database query planners (PostgreSQL, BigQuery, Snowflake) produced plans that performed poorly because their cost models assumed data properties that didn’t hold for sparse geospatial data. The team spent six months investigating whether a custom query planning layer, aware of their specific data distribution, could significantly outperform general-purpose planners. They formulated cost model hypotheses, built a prototype planning layer, and validated performance against production-representative benchmarks.

Does it qualify? Yes.

Why: The hypothesis was specific and testable. The failure of general-purpose planners was documentable. The investigation addressed a constraint not resolved by published query optimization literature for their case. A domain-specific planning layer with modified cost models represents genuine advancement.

The documentation that matters: Benchmark results demonstrating general-purpose planner inadequacy. Specific cost model hypotheses tested. Performance measurements at each investigation stage.
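A toy contrast between the two cost models makes the planner failure concrete. The grid, row counts, and query region below are invented for illustration; the point is that a uniform-distribution assumption mis-estimates badly when rows cluster into a few cells.

```python
from collections import Counter

def uniform_estimate(total_rows, n_cells, cells_hit):
    """What a general-purpose planner's cost model assumes: rows are
    spread evenly, so cost scales with the number of cells touched."""
    return total_rows * cells_hit / n_cells

def histogram_estimate(cell_counts, query_cells):
    """A distribution-aware model: sum actual per-cell row counts."""
    return sum(cell_counts.get(c, 0) for c in query_cells)

# Toy sparse dataset: 10,000 rows crammed into 3 of 1,000 grid cells
cell_counts = Counter({(0, 0): 6000, (5, 5): 3000, (9, 9): 1000})
query = [(1, 1), (2, 2), (3, 3)]  # region the query actually scans

print(uniform_estimate(10_000, 1_000, len(query)))  # → 30.0 estimated rows
print(histogram_estimate(cell_counts, query))       # → 0: region is empty
```

When the estimates diverge like this, the general-purpose planner picks the wrong join orders and access paths, which is the documentable failure the investigation started from.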

Example 10: Failed Investigation That Produced No Working Solution

What the team did: A healthtech company investigated whether neural-symbolic hybrid reasoning could improve diagnostic accuracy on their clinical dataset beyond what pure ML approaches produced. Four months testing five different hybrid architectures, running controlled experiments against a clinical benchmark. None outperformed the existing ML model. The investigation was abandoned.

Does it qualify? Yes.

Why: SR&ED doesn’t require success. It requires genuine investigation. The technological uncertainty was real: it wasn’t known whether hybrid reasoning would outperform pure ML, and a qualified practitioner couldn’t have determined that from existing literature. The investigation was systematic: defined hypotheses, controlled experiments, documented results. Discovering that existing hybrid approaches don’t outperform their baseline on this task is itself technical knowledge. CRA explicitly recognizes that failed experiments can qualify.

The documentation that matters: The specific architectures evaluated and the hypothesis behind each. Benchmark methodology for comparison. Experimental logs documenting what was tried and observed. A clear narrative of why each approach was abandoned.

Reading the Pattern

Looking across these 10 examples, the qualifying cases share three properties.

The uncertainty was genuine and specific. Not “we weren’t sure if it would work” but “this specific constraint combination is not addressed by published approaches, and here’s the evidence.”

The investigation was structured. Not “we tried some things” but a documented progression of hypotheses tested, results observed, approaches refined or ruled out.

The documentation existed at the time. The narrative of what was tried and why was captured during the investigation, not reconstructed from memory at filing time.
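As a hypothetical sketch of what "captured during the investigation" can mean in practice, a contemporaneous log can be as simple as structured records appended as experiments run. The field names here are illustrative, not a CRA-mandated schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExperimentRecord:
    """One hypothesis-experiment-result entry, written as work happens,
    not reconstructed at filing time."""
    logged: date
    hypothesis: str
    method: str
    result: str
    conclusion: str

log = [
    ExperimentRecord(
        logged=date(2025, 3, 4),
        hypothesis="Hybrid architecture A beats the pure-ML baseline (AUC 0.91)",
        method="5-fold cross-validation on the clinical benchmark, fixed seed",
        result="AUC 0.89 +/- 0.01",
        conclusion="Rejected: no improvement, and the symbolic layer adds latency",
    ),
]
print(len(log), log[0].conclusion)
```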

The non-qualifying cases fail on the first criterion: the answer was knowable from existing resources. Applying known algorithms, implementing documented APIs, tuning established model architectures. These are skilled engineering, not technological research.

The most common SR&ED mistake software companies make is filing based on effort and complexity rather than uncertainty and investigation. Hard work is not SR&ED. Research is. The distinction: whether a qualified engineer, armed with existing knowledge, could have predicted the outcome before the work began.

Want help identifying qualifying work in your engineering team? Talk to our team about automating SR&ED documentation from your development tools.

#sred-examples #sred-eligibility #software-rd #cra #tax-credits #r-and-d

About Chrono Innovation

Engineering Team

A passionate technologist at Chrono Innovation, dedicated to sharing knowledge and insights about modern software development practices.

Ready to Build Your Next Project?

Let's discuss how we can help turn your ideas into reality with cutting-edge technology.

Get in Touch