How to Price an AI Project Without Guessing

AI projects priced by the hour produce margin compression and scope wars. Pricing by outcome requires a discovery sprint. Here is the framework that converts business pain into a defensible number.

An AI services firm takes on a document classification project. The estimate is built on a rough understanding of the corpus size and a guess at the integration complexity. The estimate is too low. Data quality issues not visible in the scoping call consume three sprints. The integration schema is undocumented and requires reverse engineering. The project delivers at negative margin. The client is unhappy with the delay. The relationship ends before a second engagement begins.

This is the hourly pricing failure mode. The estimate was wrong because the information needed to price correctly was not gathered before pricing. The client was not the problem. The commercial structure was.

Why Hourly Pricing Destroys AI Projects

The hourly model has a structural defect that compounds in AI projects specifically.

The incentive structure is misaligned from the start. The client pays for effort. The vendor is rewarded for effort. Neither party has a structural incentive to find the simplest solution. A more complex solution means more hours for the vendor and a less efficient outcome for the client, but both parties are operating inside a framework that makes complexity invisible as a cost until the invoice arrives.

AI projects priced by the hour have an additional problem that does not apply to most software development: the effort is genuinely hard to estimate before a systematic investigation of the specific environment. Data quality varies enormously across organizations and is not visible from a sales conversation. Integration complexity depends on the age and documentation quality of existing systems. Edge cases in production surface over weeks of real usage, not during scoping. The honest estimate before discovery is a range, not a number, and a range does not close contracts.

The result is predictable. The estimate is either too low, producing margin compression, rushed delivery, and quality shortcuts, or too high, producing a negotiation that starts the relationship defensively and signals that the vendor does not know what the work involves.

Value-based pricing is not charging whatever the market will bear. It is anchoring the price to the business value being created, which requires knowing what that value is before writing the proposal. That knowledge comes from discovery.

Discovery Is the Prerequisite for Pricing

The discovery sprint produces the five outputs required to price accurately.

A process map of the target workflow, including the steps before and after the AI intervention, the systems involved, the people accountable, and the decision points where the AI output will be used. Without the process map, the scope is ambiguous in ways that surface as scope disputes after the project starts.

A data inventory covering the sources the AI will need to access, their format, their quality, their access controls, and the gaps between what is available and what the system needs. Data quality issues identified during discovery are scoped and priced. Data quality issues discovered in delivery become margin compression.

A decision owner map identifying who uses the AI output to make decisions, what decisions they make with it, and what the cost of a wrong output is in each case. This is the foundation of the value estimate.

A value estimate quantifying what the process currently costs the organization in time, errors, or missed revenue per year. The value estimate is the pricing anchor. Without it, the price is a number pulled from a benchmark or a competitor’s proposal. With it, the price is a defensible fraction of a validated business impact.

A risk inventory cataloguing the technical risks, the organizational change requirements, the integration dependencies, and the data quality gaps that could affect delivery. The risk inventory is what allows the proposal to include appropriate buffers and scope exclusions without those provisions feeling adversarial.

Discovery is a paid engagement. It is not a free scoping exercise provided to close the project contract. It produces a deliverable the client owns: the five-output document that gives them everything they need to evaluate whether the follow-on project is worth doing, with or without this vendor. Paying for discovery is a signal that the client values structured analysis. It is also the gate that filters clients who want a proposal within forty-eight hours from clients who understand that the proposal quality depends on the investigation quality.

The Value Estimation Framework

[Figure: Value-based AI pricing as three measurable levers: time recovered, error cost avoided, and revenue or capacity unlocked.]

Three value levers are quantifiable in discovery for most enterprise AI projects.

Time recovered is the most direct. Identify how many hours per week the target process currently consumes across the people involved. Multiply by the fully loaded hourly cost, then annualize. The result is the annual cost of the current process. An AI system that reduces that cost by thirty percent delivers a value that is calculable before the project starts.
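A back-of-envelope sketch of that arithmetic, where every input is an illustrative assumption gathered in discovery, not a benchmark:

```python
# Annual value of time recovered. All inputs are discovery findings
# for a specific client; the numbers here are illustrative only.
hours_per_week = 120      # hours the target process consumes across the team
fully_loaded_rate = 85    # fully loaded cost per hour, in dollars
weeks_per_year = 48       # working weeks, net of holidays and leave
reduction = 0.30          # expected reduction from the AI system

annual_process_cost = hours_per_week * fully_loaded_rate * weeks_per_year
time_value_recovered = annual_process_cost * reduction

print(f"Annual process cost:  ${annual_process_cost:,.0f}")   # $489,600
print(f"Value recovered/year: ${time_value_recovered:,.0f}")  # $146,880
```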

Error cost is often larger than time cost and more defensible as a pricing anchor. Identify the cost of errors in the current process: rework hours required to correct mistakes, compliance penalties associated with specific error types, customer churn attributable to errors in service delivery, and delayed decisions caused by incomplete or incorrect information. This is often the number that produces the “I did not realize it was that expensive” response from the economic buyer, because error costs are distributed across multiple teams and rarely aggregated.
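Because the costs are scattered across teams, the discovery work here is mostly aggregation. A minimal sketch, with every line item an assumption to be validated with the client:

```python
# Error cost aggregation: each entry is a discovery finding to validate
# with the relevant team; the figures are placeholders, not benchmarks.
annual_error_costs = {
    "rework_hours": 1_800 * 85,      # hours spent correcting mistakes x loaded rate
    "compliance_penalties": 60_000,  # penalties tied to specific error types
    "churn_attributable": 120_000,   # revenue lost to errors in service delivery
    "delayed_decisions": 45_000,     # cost of decisions made on bad information
}

total_error_cost = sum(annual_error_costs.values())
print(f"Annual error cost: ${total_error_cost:,.0f}")  # $378,000
```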

Capacity unlocked is the hardest to quantify and often the most significant. What decisions or processes are currently not happening because the team does not have time or information? A sales team that cannot analyze all incoming RFPs is leaving revenue unqualified. A compliance team that cannot review all contracts is carrying risk they cannot see. The capacity unlocked by an AI system that handles the volume the team cannot is real value that the organization is not currently capturing.

The estimate does not need to be precise. It needs to be directionally correct and defensible in a conversation with the economic buyer. The purpose is not to produce an exact number. It is to establish a validated order of magnitude that makes the project price look rational relative to the value being created.

The pricing multiple follows from the discovery findings: an AI project is typically priced at 10 to 30 percent of the annualized value recovered in the first year, and an ongoing retainer at 5 to 15 percent of that value per year thereafter. These are framework guidelines, not market benchmarks. The right multiple depends on competitive alternatives, delivery risk, and the relationship context. The ranges exist to anchor the conversation with the economic buyer, not to constrain the negotiation.
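Applied to a validated value estimate, the multiples translate directly into a range. A minimal sketch, assuming an annualized value roughly equal to the sum of the time and error figures from the sketches above:

```python
# Translate a validated annualized value into price ranges.
# The multiples are the framework's guideline ranges, not market data.
annual_value = 525_000  # assumed: time recovered + error cost avoided, rounded

project_low, project_high = annual_value * 0.10, annual_value * 0.30
retainer_low, retainer_high = annual_value * 0.05, annual_value * 0.15

print(f"Project price range: ${project_low:,.0f} to ${project_high:,.0f}")
# $52,500 to $157,500
print(f"Retainer range: ${retainer_low:,.0f} to ${retainer_high:,.0f} per year")
# $26,250 to $78,750 per year
```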

Scoping for Price Protection

The scope of work is the primary instrument for protecting both parties in an AI project.

What is included must be explicit and granular: the specific processes in scope, the specific data sources, the specific system integrations, the specific output types, the specific languages and formats the system will handle. The more explicit the inclusion list, the less room for scope disputes.

What is explicitly excluded is as important as what is included. The processes not in scope. The integrations not covered in this project. The document formats not handled. The languages or geographies not included. Explicit exclusions prevent the conversation from becoming “why didn’t you include X?” They establish “X was not in scope, here is the change request process if you want to add it.”

Acceptance criteria define how success will be measured and by whom. The eval harness metrics, the accuracy thresholds, the latency requirements, the user acceptance test protocol. Without acceptance criteria, delivery becomes subjective. With them, both parties have an agreed definition of done.
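One way to make acceptance criteria concrete is to capture them as a structured artifact both parties sign off on before delivery starts. A hypothetical sketch; every threshold here is project-specific and to be negotiated, not a recommended value:

```python
# Hypothetical acceptance criteria for a document classification project.
# Every metric, threshold, and protocol below is a placeholder assumption.
acceptance_criteria = {
    "eval_harness": {
        "classification_accuracy": {"metric": "macro_f1", "threshold": 0.92},
        "eval_set": "client-labeled holdout, 500 documents, frozen at kickoff",
    },
    "latency": {"p95_seconds": 3.0},
    "user_acceptance": {
        "protocol": "two-week parallel run against the manual process",
        "sign_off": "named decision owner",
    },
}
```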

Change request triggers define what constitutes a scope change versus what is included in normal project delivery. Any work outside the defined scope is a change request with its own estimate and its own approval process. This protects both parties from the scope drift that compresses margins and damages relationships.

A 30 to 40 percent buffer should be built into the project price for the edge cases and integration complexities that are not visible in discovery but are predictable in the aggregate. This is not waste. It is the honest cost of operating under genuine uncertainty in a domain where data quality and integration complexity are only fully visible in production. The buffer is disclosed to the client as part of the pricing rationale, not buried in the estimate.
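Arithmetically the buffer is simple, and disclosing it is the point. A sketch with assumed figures:

```python
# Disclosed delivery-risk buffer on the base project estimate.
base_estimate = 90_000  # effort-based estimate from discovery (assumed)
buffer_rate = 0.35      # within the framework's 30-40 percent range

print(f"Base estimate: ${base_estimate:,.0f}")
print(f"Buffer (35%):  ${base_estimate * buffer_rate:,.0f}")      # $31,500
print(f"Project price: ${base_estimate * (1 + buffer_rate):,.0f}")  # $121,500
```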

Qualifying the Client Before Pricing

Not every client is worth the effort of a proposal.

The qualification conversation identifies two things: whether the client’s problem is one the system can solve, and whether the client’s expectations are compatible with how good AI delivery actually works.

Green flags: the economic buyer is in the room for discovery conversations. The organization has already tried to solve the problem and knows what failure looks like. There is a named decision owner who will act on AI output. Budget has been approved, not just discussed in general terms.

Red flags: the client expects a fixed-price project with unlimited revisions and no change request process. The economic buyer is absent from scoping conversations, with a delegate who cannot make commitments. The organization wants AI for competitive signaling rather than operational improvement, meaning success is defined by announcement, not by metrics. The stated budget is below the minimum viable project for the identified problem.

The qualification question that separates good clients from bad ones: “If we showed you at month three that the system is not performing against the agreed metric, what would you do?” A client who understands outcome-based delivery says “let’s investigate why and fix it.” A client who is buying a deliverable says “that’s the system I paid for.” The second client is not a good candidate for a relationship where the price is anchored to value.

Pricing as a Signal of Confidence

Underpricing an AI project is not a competitive advantage. It is a signal that the vendor does not understand the value they are creating.

Clients who are evaluating on price alone are buying a commodity. They are selecting based on the lowest number in a spreadsheet. That is not the client for a premium AI partner, and it is not the engagement that produces the outcome track record that allows a firm to increase prices over time.

The pricing conversation, done well, is a demonstration of the analytical capability being sold. A partner who can quantify the value of a process improvement, scope the work to protect that value, and defend the price with a discovery-grounded estimate is showing the economic buyer exactly what they will get for the money: structured analysis, defensible numbers, and a clear line between what is included and what is not.

The right client responds to this with confidence, not with a request to reduce the scope until the price fits a pre-set budget. A budget that was not anchored to the value is a number someone picked before the conversation. A price anchored to the value estimate is a fraction of a number the client now understands. That is the conversation where the right clients close.


The discovery sprint is the first engagement Terraris.ai offers because it is the only way to price what follows accurately. If you are building the commercial structure for your AI practice, or evaluating how to price an engagement you are currently scoping, the framework starts here.